Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137060 Running as SYSTEM [EnvInject] - Loading node environment variables. Building remotely on prd-ubuntu1804-docker-8c-8g-14213 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-verify-pap [ssh-agent] Looking for ssh-agent implementation... [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) $ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-Yi0MVoirl3A6/agent.2140 SSH_AGENT_PID=2142 [ssh-agent] Started. Running ssh-add (command line suppressed) Identity added: /w/workspace/policy-pap-master-project-csit-verify-pap@tmp/private_key_3550345780838948553.key (/w/workspace/policy-pap-master-project-csit-verify-pap@tmp/private_key_3550345780838948553.key) [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) The recommended git tool is: NONE using credential onap-jenkins-ssh Wiping out workspace first. Cloning the remote Git repository Cloning repository git://cloud.onap.org/mirror/policy/docker.git > git init /w/workspace/policy-pap-master-project-csit-verify-pap # timeout=10 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git > git --version # timeout=10 > git --version # 'git version 2.17.1' using GIT_SSH to set credentials Gerrit user Verifying host key using manually-configured host key entries > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git using GIT_SSH to set credentials Gerrit user Verifying host key using manually-configured host key entries > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/60/137060/2 # timeout=30 > git rev-parse b398b692983f0c3a8cd19dd7c46b3af8d1a0a146^{commit} # timeout=10 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script Checking out Revision b398b692983f0c3a8cd19dd7c46b3af8d1a0a146 (refs/changes/60/137060/2) > git config core.sparsecheckout # timeout=10 > git checkout -f b398b692983f0c3a8cd19dd7c46b3af8d1a0a146 # timeout=30 Commit message: "Add kafka support in K8s CSIT" > git rev-parse FETCH_HEAD^{commit} # timeout=10 > git rev-list --no-walk caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785 # timeout=10 provisioning config files... 
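The fetch/checkout sequence above pulls patch set 2 of Gerrit change 137060 by its change ref rather than by branch. To reproduce the same checkout locally, something along these lines should work (the change ref and commit id are taken from the log above; access to the cloud.onap.org mirror is assumed):

  # clone the mirror and check out the exact patch set this build tested
  git clone git://cloud.onap.org/mirror/policy/docker.git
  cd docker
  git fetch origin refs/changes/60/137060/2
  git checkout -f FETCH_HEAD   # detached HEAD at b398b692983f..., as in this build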
copy managed file [npmrc] to file:/home/jenkins/.npmrc copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins14080630004799768180.sh ---> python-tools-install.sh Setup pyenv: * system (set by /opt/pyenv/version) * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.6 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-l0SW lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-l0SW/bin to PATH Generating Requirements File Python 3.10.6 pip 23.3.2 from /tmp/venv-l0SW/lib/python3.10/site-packages/pip (python 3.10) appdirs==1.4.4 argcomplete==3.2.1 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.34.23 botocore==1.34.23 bs4==0.0.2 cachetools==5.3.2 certifi==2023.11.17 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 cliff==4.5.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==2.5.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.5.0 docker==4.2.2 dogpile.cache==1.3.0 email-validator==2.1.0.post1 filelock==3.13.1 future==0.18.3 gitdb==4.0.11 GitPython==3.1.41 google-auth==2.26.2 httplib2==0.22.0 identify==2.5.33 idna==3.6 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.21.1 jsonschema-specifications==2023.12.1 keystoneauth1==5.5.0 kubernetes==29.0.0 lftools==0.37.8 lxml==5.1.0 MarkupSafe==2.1.4 msgpack==1.0.7 multi_key_dict==2.0.3 munch==4.0.0 netaddr==0.10.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==0.62.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==3.0.0 oslo.config==9.3.0 oslo.context==5.3.0 oslo.i18n==6.2.0 oslo.log==5.4.0 oslo.serialization==5.3.0 oslo.utils==7.0.0 packaging==23.2 pbr==6.0.0 platformdirs==4.1.0 prettytable==3.9.0 pyasn1==0.5.1 pyasn1-modules==0.3.0 pycparser==2.21 pygerrit2==2.0.15 PyGithub==2.1.1 pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2 pyrsistent==0.20.0 python-cinderclient==9.4.0 python-dateutil==2.8.2 python-heatclient==3.4.0 python-jenkins==1.8.2 python-keystoneclient==5.3.0 python-magnumclient==4.3.0 python-novaclient==18.4.0 python-openstackclient==6.0.0 python-swiftclient==4.4.0 pytz==2023.3.post1 PyYAML==6.0.1 referencing==0.32.1 requests==2.31.0 requests-oauthlib==1.3.1 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.17.1 rsa==4.9 ruamel.yaml==0.18.5 ruamel.yaml.clib==0.2.8 s3transfer==0.10.0 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5 stevedore==5.1.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.3 tqdm==4.66.1 typing_extensions==4.9.0 tzdata==2023.4 urllib3==1.26.18 virtualenv==20.25.0 wcwidth==0.2.13 websocket-client==1.7.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0 yq==3.2.3 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SET_JDK_VERSION=openjdk17 GIT_URL="git://cloud.onap.org/mirror" [EnvInject] - Variables injected successfully. 
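python-tools-install.sh above bootstraps a throwaway virtualenv, installs lftools, and snapshots the resulting package set (the "Generating Requirements File" step). A minimal sketch of that flow, assuming the script roughly does what its output suggests (the venv path below is hypothetical; this run used /tmp/venv-l0SW):

  # create an isolated venv, pull in lftools, and freeze the dependency set
  python3 -m venv /tmp/venv-example
  . /tmp/venv-example/bin/activate
  pip install --upgrade pip
  pip install lftools          # pulls in the long dependency list printed above
  pip freeze                   # produces the frozen list echoed by the job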
[policy-pap-master-project-csit-verify-pap] $ /bin/sh /tmp/jenkins4579151614360948690.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "17.0.4" 2022-07-19 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-verify-pap] $ /bin/sh -xe /tmp/jenkins10776437951953663126.sh + /w/workspace/policy-pap-master-project-csit-verify-pap/csit/run-project-csit.sh pap + set +u + save_set + RUN_CSIT_SAVE_SET=ehxB + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace + '[' 1 -eq 0 ']' + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']' + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts + SCRIPTS=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts + export ROBOT_VARIABLES= + ROBOT_VARIABLES= + export PROJECT=pap + PROJECT=pap + cd /w/workspace/policy-pap-master-project-csit-verify-pap + rm -rf /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap + mkdir -p /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap + source_safely /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/prepare-robot-env.sh + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/prepare-robot-env.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/prepare-robot-env.sh ++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']' +++ mktemp -d ++ ROBOT_VENV=/tmp/tmp.glEjOpyI3A ++ echo ROBOT_VENV=/tmp/tmp.glEjOpyI3A +++ python3 --version ++ echo 'Python version is: Python 3.6.9' Python version is: Python 3.6.9 ++ python3 -m venv --clear /tmp/tmp.glEjOpyI3A ++ source /tmp/tmp.glEjOpyI3A/bin/activate +++ deactivate nondestructive +++ '[' -n '' ']' +++ '[' -n '' ']' +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r +++ '[' -n '' ']' +++ unset VIRTUAL_ENV +++ '[' '!' 
nondestructive = nondestructive ']' +++ VIRTUAL_ENV=/tmp/tmp.glEjOpyI3A +++ export VIRTUAL_ENV +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin +++ PATH=/tmp/tmp.glEjOpyI3A/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin +++ export PATH +++ '[' -n '' ']' +++ '[' -z '' ']' +++ _OLD_VIRTUAL_PS1= +++ '[' 'x(tmp.glEjOpyI3A) ' '!=' x ']' +++ PS1='(tmp.glEjOpyI3A) ' +++ export PS1 +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r ++ set -exu ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' ++ echo 'Installing Python Requirements' Installing Python Requirements ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/pylibs.txt ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2023.11.17 cffi==1.15.1 charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 lxml==5.1.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 WebTest==3.0.0 zipp==3.6.0 ++ mkdir -p /tmp/tmp.glEjOpyI3A/src/onap ++ rm -rf /tmp/tmp.glEjOpyI3A/src/onap/testsuite ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre ++ echo 'Installing python confluent-kafka library' Installing python confluent-kafka library ++ python3 -m pip install -qq confluent-kafka ++ echo 'Uninstall docker-py and reinstall docker.' Uninstall docker-py and reinstall docker. 
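Note the pinned installs in the trace above: the agent's system Python is 3.6.9, so prepare-robot-env.sh caps pip and setuptools, presumably to stay compatible with that interpreter, and pulls robotframework-onap from the ONAP staging index with --pre (which is why the freeze shows a dev build, 0.6.0.dev105). The standalone equivalent, as a sketch using the paths from the trace (pylibs.txt path shown relative to the repo root); the commands that follow then swap the old docker-py distribution for the current docker package, since both ship the same `docker` import namespace:

  python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
  python3 -m pip install -qq -r csit/resources/scripts/pylibs.txt
  # --pre allows the dev builds published on the staging index
  python3 -m pip install -qq --upgrade \
      --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple \
      'robotframework-onap==0.6.0.*' --pre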
++ python3 -m pip uninstall -y -qq docker ++ python3 -m pip install -U -qq docker ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2023.11.17 cffi==1.15.1 charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1 deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 future==0.18.3 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.1.0 MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0 ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 PyYAML==6.0.1 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0 ++ uname ++ grep -q Linux ++ sudo apt-get -y -qq install libxml2-utils + load_set + _setopts=ehuxB ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o nounset + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo ehuxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +e + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +u + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + source_safely /tmp/tmp.glEjOpyI3A/bin/activate + '[' -z /tmp/tmp.glEjOpyI3A/bin/activate ']' + relax_set + set +e + set +o pipefail + . /tmp/tmp.glEjOpyI3A/bin/activate ++ deactivate nondestructive ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin ']' ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin ++ export PATH ++ unset _OLD_VIRTUAL_PATH ++ '[' -n '' ']' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r ++ '[' -n '' ']' ++ unset VIRTUAL_ENV ++ '[' '!' 
nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/tmp/tmp.glEjOpyI3A ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin ++ PATH=/tmp/tmp.glEjOpyI3A/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin ++ export PATH ++ '[' -n '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1='(tmp.glEjOpyI3A) ' ++ '[' 'x(tmp.glEjOpyI3A) ' '!=' x ']' ++ PS1='(tmp.glEjOpyI3A) (tmp.glEjOpyI3A) ' ++ export PS1 ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests + export TEST_OPTIONS= + TEST_OPTIONS= ++ mktemp -d + WORKDIR=/tmp/tmp.vXEmLvt3D4 + cd /tmp/tmp.vXEmLvt3D4 + docker login -u docker -p docker nexus3.onap.org:10001 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded + SETUP=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh + '[' -f /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh ']' + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh' Running setup script /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh + source_safely /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh ++ source /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/node-templates.sh +++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']' ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-verify-pap/.gitreview +++ GERRIT_BRANCH=master +++ echo GERRIT_BRANCH=master GERRIT_BRANCH=master +++ rm -rf /w/workspace/policy-pap-master-project-csit-verify-pap/models +++ mkdir /w/workspace/policy-pap-master-project-csit-verify-pap/models +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-verify-pap/models Cloning into '/w/workspace/policy-pap-master-project-csit-verify-pap/models'... 
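The docker login above triggers Docker's standard warning about passing --password on the command line. Following the warning's own recommendation, the same login can be done non-interactively via stdin (a sketch; docker/docker are the anonymous read-only credentials the job itself uses for nexus3.onap.org:10001):

  # avoid exposing the password in the process list and shell history
  echo docker | docker login -u docker --password-stdin nexus3.onap.org:10001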
+++ export DATA=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies +++ DATA=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json ++ source /w/workspace/policy-pap-master-project-csit-verify-pap/compose/start-compose.sh apex-pdp --grafana +++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']' +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-verify-pap/compose +++ grafana=false +++ gui=false +++ [[ 2 -gt 0 ]] +++ key=apex-pdp +++ case $key in +++ echo apex-pdp apex-pdp +++ component=apex-pdp +++ shift +++ [[ 1 -gt 0 ]] +++ key=--grafana +++ case $key in +++ grafana=true +++ shift +++ [[ 0 -gt 0 ]] +++ cd /w/workspace/policy-pap-master-project-csit-verify-pap/compose +++ echo 'Configuring docker compose...' Configuring docker compose... +++ source export-ports.sh +++ source get-versions.sh +++ '[' -z pap ']' +++ '[' -n apex-pdp ']' +++ '[' apex-pdp == logs ']' +++ '[' true = true ']' +++ echo 'Starting apex-pdp application with Grafana' Starting apex-pdp application with Grafana +++ docker-compose up -d apex-pdp grafana Creating network "compose_default" with the default driver Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... latest: Pulling from prom/prometheus Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... latest: Pulling from grafana/grafana Digest: sha256:6b5b37eb35bbf30e7f64bd7f0fd41c0a5b7637f65d3bf93223b04a192b8bf3e2 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 10.10.2: Pulling from mariadb Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT)... 3.1.1-SNAPSHOT: Pulling from onap/policy-models-simulator Digest: sha256:09b9abb94ede918d748d5f6ffece2e7592c9941527c37f3d00df286ee158ae05 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT Pulling zookeeper (confluentinc/cp-zookeeper:latest)... latest: Pulling from confluentinc/cp-zookeeper Digest: sha256:000f1d11090f49fa8f67567e633bab4fea5dbd7d9119e7ee2ef259c509063593 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest Pulling kafka (confluentinc/cp-kafka:latest)... 
latest: Pulling from confluentinc/cp-kafka Digest: sha256:51145a40d23336a11085ca695d02bdeee66fe01b582837c6d223384952226be9 Status: Downloaded newer image for confluentinc/cp-kafka:latest Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)... 3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator Digest: sha256:eb47623eeab9aad8524ecc877b6708ae74b57f9f3cfe77554ad0d1521491cb5d Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT)... 3.1.1-SNAPSHOT: Pulling from onap/policy-api Digest: sha256:bbf3044dd101de99d940093be953f041397d02b2f17a70f8da7719c160735c2e Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT)... 3.1.1-SNAPSHOT: Pulling from onap/policy-pap Digest: sha256:8a0432281bb5edb6d25e3d0e62d78b6aebc2875f52ecd11259251b497208c04e Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT)... 3.1.1-SNAPSHOT: Pulling from onap/policy-apex-pdp Digest: sha256:0fdae8f3a73915cdeb896f38ac7d5b74e658832fd10929dcf3fe68219098b89b Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT Creating compose_zookeeper_1 ... Creating prometheus ... Creating mariadb ... Creating simulator ... Creating prometheus ... done Creating grafana ... Creating grafana ... done Creating simulator ... done Creating mariadb ... done Creating policy-db-migrator ... Creating compose_zookeeper_1 ... done Creating kafka ... Creating policy-db-migrator ... done Creating policy-api ... Creating policy-api ... done Creating kafka ... done Creating policy-pap ... Creating policy-pap ... done Creating policy-apex-pdp ... Creating policy-apex-pdp ... done +++ echo 'Prometheus server: http://localhost:30259' Prometheus server: http://localhost:30259 +++ echo 'Grafana server: http://localhost:30269' Grafana server: http://localhost:30269 +++ cd /w/workspace/policy-pap-master-project-csit-verify-pap ++ sleep 10 ++ unset http_proxy https_proxy ++ bash /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 Waiting for REST to come up on localhost port 30003... 
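wait_for_rest.sh blocks until PAP's REST port answers before the suites start. The script body is not part of this log, so the loop below is only an assumed sketch of what such a wait typically looks like; the container status tables that follow are printed while the stack comes up:

  # hypothetical re-implementation: poll the TCP port until it accepts connections
  while ! nc -z localhost 30003; do
    echo 'Waiting for REST to come up on localhost port 30003...'
    sleep 5
  done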
NAMES                 STATUS
policy-apex-pdp       Up 10 seconds
policy-pap            Up 11 seconds
policy-api            Up 13 seconds
kafka                 Up 12 seconds
grafana               Up 18 seconds
mariadb               Up 16 seconds
prometheus            Up 19 seconds
simulator             Up 17 seconds
compose_zookeeper_1   Up 15 seconds

NAMES                 STATUS
policy-apex-pdp       Up 15 seconds
policy-pap            Up 16 seconds
policy-api            Up 18 seconds
kafka                 Up 17 seconds
grafana               Up 23 seconds
mariadb               Up 21 seconds
prometheus            Up 24 seconds
simulator             Up 22 seconds
compose_zookeeper_1   Up 20 seconds

NAMES                 STATUS
policy-apex-pdp       Up 20 seconds
policy-pap            Up 21 seconds
policy-api            Up 23 seconds
kafka                 Up 22 seconds
grafana               Up 28 seconds
mariadb               Up 26 seconds
prometheus            Up 29 seconds
simulator             Up 27 seconds
compose_zookeeper_1   Up 25 seconds

NAMES                 STATUS
policy-apex-pdp       Up 25 seconds
policy-pap            Up 26 seconds
policy-api            Up 28 seconds
kafka                 Up 27 seconds
grafana               Up 33 seconds
mariadb               Up 31 seconds
prometheus            Up 34 seconds
simulator             Up 32 seconds
compose_zookeeper_1   Up 30 seconds

NAMES                 STATUS
policy-apex-pdp       Up 30 seconds
policy-pap            Up 31 seconds
policy-api            Up 33 seconds
kafka                 Up 32 seconds
grafana               Up 38 seconds
mariadb               Up 36 seconds
prometheus            Up 39 seconds
simulator             Up 37 seconds
compose_zookeeper_1   Up 35 seconds

NAMES                 STATUS
policy-apex-pdp       Up 35 seconds
policy-pap            Up 36 seconds
policy-api            Up 38 seconds
kafka                 Up 37 seconds
grafana               Up 44 seconds
mariadb               Up 41 seconds
prometheus            Up 44 seconds
simulator             Up 42 seconds
compose_zookeeper_1   Up 40 seconds
++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ docker_stats
+ tee /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 13:55:49 up 5 min, 0 users, load average: 3.33, 1.78, 0.77
Tasks: 200 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 10.7 us, 2.2 sy, 0.0 ni, 79.5 id, 7.5 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.6G         22G        1.3M        6.5G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 35 seconds
policy-pap            Up 36 seconds
policy-api            Up 39 seconds
kafka                 Up 38 seconds
grafana               Up 44 seconds
mariadb               Up 42 seconds
prometheus            Up 45 seconds
simulator             Up 43 seconds
compose_zookeeper_1   Up 41 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O        PIDS
9c961486a35c   policy-apex-pdp       1.91%   187.7MiB / 31.41GiB   0.58%   8.73kB / 8.27kB   0B / 0B          48
acbeea9b5acd   policy-pap            3.26%   523.2MiB / 31.41GiB   1.63%   30.8kB / 34kB     0B / 180MB       61
0dae240d22d9   policy-api            0.47%   436.6MiB / 31.41GiB   1.36%   1MB / 737kB       0B / 0B          52
1b1ecfacb928   kafka                 0.84%   381MiB / 31.41GiB     1.18%   74.6kB / 76.9kB   0B / 508kB       81
c25b2e6d31ff   grafana               0.02%   53.89MiB / 31.41GiB   0.17%   19.6kB / 3.57kB   0B / 23.9MB      16
aea2f77a8936   mariadb               0.02%   102MiB / 31.41GiB     0.32%   996kB / 1.19MB    11MB / 48.5MB    38
16914ab7f65c   prometheus            0.01%   18.48MiB / 31.41GiB   0.06%   28.7kB / 1.09kB   4.1kB / 0B       13
08c608326275   simulator             0.17%   122MiB / 31.41GiB     0.38%   1.31kB / 0B       0B / 0B          76
328e2f248c5c   compose_zookeeper_1   0.11%   96.82MiB / 31.41GiB   0.30%   56.5kB / 49.9kB   98.3kB / 393kB   60
+ echo
+ cd /tmp/tmp.vXEmLvt3D4
+ echo 'Reading the testplan:'
Reading the testplan:
+ echo 'pap-test.robot pap-slas.robot'
+ egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
+ sed 's|^|/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/|'
+ cat testplan.txt
/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot
++ xargs
+ SUITES='/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot'
+ echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates'
ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates
+ echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot ...'
Starting Robot test suites /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot ...
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot
==============================================================================
pap
==============================================================================
pap.Pap-Test
==============================================================================
LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
------------------------------------------------------------------------------
LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
------------------------------------------------------------------------------
LoadNodeTemplates :: Create node templates in database using speci... | PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after deploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
------------------------------------------------------------------------------
pap.Pap-Test | PASS |
22 tests, 22 passed, 0 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
------------------------------------------------------------------------------
pap.Pap-Slas | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap | PASS |
30 tests, 30 passed, 0 failed
==============================================================================
Output:  /tmp/tmp.vXEmLvt3D4/output.xml
Log:     /tmp/tmp.vXEmLvt3D4/log.html
Report:  /tmp/tmp.vXEmLvt3D4/report.html
+ RESULT=0
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 0'
RESULT: 0
+ exit 0
+ on_exit
+ rc=0
+ [[ -n /w/workspace/policy-pap-master-project-csit-verify-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
policy-api            Up 2 minutes
kafka                 Up 2 minutes
grafana               Up 2 minutes
mariadb               Up 2 minutes
prometheus            Up 2 minutes
simulator             Up 2 minutes
compose_zookeeper_1   Up 2 minutes
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 13:57:40 up 7 min, 0 users, load average: 0.97, 1.48, 0.78
Tasks: 198 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
%Cpu(s): 9.2 us, 1.8 sy, 0.0 ni, 82.7 id, 6.1 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.7G         22G        1.3M        6.5G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
policy-api            Up 2 minutes
kafka                 Up 2 minutes
grafana               Up 2 minutes
mariadb               Up 2 minutes
prometheus            Up 2 minutes
simulator             Up 2 minutes
compose_zookeeper_1   Up 2 minutes
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %    MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O        PIDS
9c961486a35c   policy-apex-pdp       0.34%    185.7MiB / 31.41GiB   0.58%   57.1kB / 91.8kB   0B / 0B          50
acbeea9b5acd   policy-pap            24.73%   486.1MiB / 31.41GiB   1.51%   2.34MB / 821kB    0B / 180MB       66
0dae240d22d9   policy-api            0.12%    472.7MiB / 31.41GiB   1.47%   2.49MB / 1.29MB   0B / 0B          53
1b1ecfacb928   kafka                 1.52%    397.1MiB / 31.41GiB   1.23%   245kB / 219kB     0B / 606kB       83
c25b2e6d31ff   grafana               0.01%    50.12MiB / 31.41GiB   0.16%   20.6kB / 4.61kB   0B / 23.9MB      16
aea2f77a8936   mariadb               0.01%    103.3MiB / 31.41GiB   0.32%   1.95MB / 4.77MB   11MB / 48.8MB    28
16914ab7f65c   prometheus            0.20%    24.74MiB / 31.41GiB   0.08%   220kB / 11.7kB    4.1kB / 0B       13
08c608326275   simulator             0.09%    122MiB / 31.41GiB     0.38%   1.58kB / 0B       0B / 0B          76
328e2f248c5c   compose_zookeeper_1   0.10%    96.84MiB / 31.41GiB   0.30%   59.4kB / 51.4kB   98.3kB / 393kB   60
+ echo
+ source_safely /w/workspace/policy-pap-master-project-csit-verify-pap/compose/stop-compose.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap/compose/stop-compose.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-verify-pap/compose/stop-compose.sh
++ echo 'Shut down started!'
Shut down started!
++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']'
++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-verify-pap/compose
++ cd /w/workspace/policy-pap-master-project-csit-verify-pap/compose
++ source export-ports.sh
++ source get-versions.sh
++ echo 'Collecting logs from docker compose containers...'
Collecting logs from docker compose containers...
++ docker-compose logs
++ cat docker_compose.log
Attaching to policy-apex-pdp, policy-pap, policy-api, kafka, policy-db-migrator, grafana, mariadb, prometheus, simulator, compose_zookeeper_1
grafana | logger=settings t=2024-01-22T13:55:05.84993818Z level=info msg="Starting Grafana" version=10.2.3 commit=1e84fede543acc892d2a2515187e545eb047f237 branch=HEAD compiled=2023-12-18T15:46:07Z
grafana | logger=settings t=2024-01-22T13:55:05.850147657Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2024-01-22T13:55:05.850155297Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2024-01-22T13:55:05.850158877Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2024-01-22T13:55:05.850161937Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2024-01-22T13:55:05.850165137Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-01-22T13:55:05.850169418Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2024-01-22T13:55:05.850172778Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2024-01-22T13:55:05.850177388Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2024-01-22T13:55:05.850181948Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2024-01-22T13:55:05.850185498Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-01-22T13:55:05.850189978Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2024-01-22T13:55:05.850195428Z level=info msg=Target target=[all]
grafana | logger=settings t=2024-01-22T13:55:05.850207289Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2024-01-22T13:55:05.850210919Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-01-22T13:55:05.850213779Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-01-22T13:55:05.850217059Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-01-22T13:55:05.850221049Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-01-22T13:55:05.850225369Z level=info msg="App mode production" grafana | logger=sqlstore t=2024-01-22T13:55:05.850524749Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-01-22T13:55:05.85054342Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-01-22T13:55:05.851092348Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-01-22T13:55:05.851918856Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-01-22T13:55:05.85265958Z level=info msg="Migration successfully executed" id="create migration_log table" duration=740.294µs grafana | logger=migrator t=2024-01-22T13:55:05.875536161Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2024-01-22T13:55:05.876617647Z level=info msg="Migration successfully executed" id="create user table" duration=1.081726ms grafana | logger=migrator t=2024-01-22T13:55:05.881059585Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-01-22T13:55:05.882162032Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.106896ms grafana | logger=migrator t=2024-01-22T13:55:05.885394259Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-01-22T13:55:05.886446184Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.051515ms grafana | logger=migrator t=2024-01-22T13:55:05.889419913Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-01-22T13:55:05.890060004Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=640.041µs grafana | logger=migrator t=2024-01-22T13:55:05.894475521Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-01-22T13:55:05.895061291Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=585.719µs grafana | logger=migrator t=2024-01-22T13:55:05.89803964Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-01-22T13:55:05.902400755Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.361766ms grafana | logger=migrator t=2024-01-22T13:55:05.905636362Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-01-22T13:55:05.906321865Z level=info msg="Migration successfully executed" id="create user table v2" duration=688.593µs grafana | logger=migrator t=2024-01-22T13:55:05.910518055Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2024-01-22T13:55:05.911184427Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=665.992µs grafana | logger=migrator 
t=2024-01-22T13:55:05.91397964Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2024-01-22T13:55:05.914912461Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=931.401µs grafana | logger=migrator t=2024-01-22T13:55:05.917916851Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-01-22T13:55:05.918528131Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=609.6µs grafana | logger=migrator t=2024-01-22T13:55:05.923506817Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-01-22T13:55:05.923994693Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=492.277µs grafana | logger=migrator t=2024-01-22T13:55:05.927316713Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-01-22T13:55:05.92841009Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.092677ms grafana | logger=migrator t=2024-01-22T13:55:05.930941604Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2024-01-22T13:55:05.930965355Z level=info msg="Migration successfully executed" id="Update user table charset" duration=24.111µs grafana | logger=migrator t=2024-01-22T13:55:05.934017456Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-01-22T13:55:05.935658521Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.640405ms grafana | logger=migrator t=2024-01-22T13:55:05.940501932Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-01-22T13:55:05.940689508Z level=info msg="Migration successfully executed" id="Add missing user data" duration=187.556µs grafana | logger=migrator t=2024-01-22T13:55:05.943437779Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2024-01-22T13:55:05.944529636Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.085807ms grafana | logger=migrator t=2024-01-22T13:55:05.947277497Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2024-01-22T13:55:05.947948049Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=670.242µs grafana | logger=migrator t=2024-01-22T13:55:05.951254869Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2024-01-22T13:55:05.953164753Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.909544ms grafana | logger=migrator t=2024-01-22T13:55:05.958264983Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2024-01-22T13:55:05.967660485Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.395002ms grafana | logger=migrator t=2024-01-22T13:55:05.970338444Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2024-01-22T13:55:05.971016597Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=680.383µs grafana | logger=migrator 
t=2024-01-22T13:55:05.973912983Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2024-01-22T13:55:05.974631447Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=714.644µs grafana | logger=migrator t=2024-01-22T13:55:05.97892651Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2024-01-22T13:55:05.979596762Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=670.042µs grafana | logger=migrator t=2024-01-22T13:55:05.982330383Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2024-01-22T13:55:05.983004115Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=671.832µs grafana | logger=migrator t=2024-01-22T13:55:05.986099528Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2024-01-22T13:55:05.987306558Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.20736ms grafana | logger=migrator t=2024-01-22T13:55:05.992044446Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2024-01-22T13:55:05.992082097Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=38.511µs grafana | logger=migrator t=2024-01-22T13:55:05.995505061Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2024-01-22T13:55:05.996559036Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.052085ms grafana | logger=migrator t=2024-01-22T13:55:06.000038742Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2024-01-22T13:55:06.00118146Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.147558ms grafana | logger=migrator t=2024-01-22T13:55:06.005892784Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2024-01-22T13:55:06.006536211Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=643.407µs grafana | logger=migrator t=2024-01-22T13:55:06.008613584Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2024-01-22T13:55:06.009248211Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=635.087µs grafana | logger=migrator t=2024-01-22T13:55:06.011915152Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2024-01-22T13:55:06.016375429Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.457807ms grafana | logger=migrator t=2024-01-22T13:55:06.020928839Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2024-01-22T13:55:06.021683688Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=754.509µs grafana | logger=migrator t=2024-01-22T13:55:06.024452891Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2024-01-22T13:55:06.02516037Z 
level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=706.559µs grafana | logger=migrator t=2024-01-22T13:55:06.028034615Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2024-01-22T13:55:06.028742314Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=707.409µs grafana | logger=migrator t=2024-01-22T13:55:06.031414054Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2024-01-22T13:55:06.032085701Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=671.247µs grafana | logger=migrator t=2024-01-22T13:55:06.036592599Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2024-01-22T13:55:06.03775491Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.163951ms grafana | logger=migrator t=2024-01-22T13:55:06.040782039Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2024-01-22T13:55:06.041380725Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=598.346µs grafana | logger=migrator t=2024-01-22T13:55:06.044290761Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2024-01-22T13:55:06.045071132Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=779.891µs grafana | logger=migrator t=2024-01-22T13:55:06.049661272Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2024-01-22T13:55:06.050000171Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=338.809µs grafana | logger=migrator t=2024-01-22T13:55:06.051975893Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2024-01-22T13:55:06.052558438Z level=info msg="Migration successfully executed" id="create star table" duration=582.665µs grafana | logger=migrator t=2024-01-22T13:55:06.055838174Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2024-01-22T13:55:06.057086307Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.247173ms grafana | logger=migrator t=2024-01-22T13:55:06.061815611Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2024-01-22T13:55:06.062875279Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.059178ms grafana | logger=migrator t=2024-01-22T13:55:06.066927275Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2024-01-22T13:55:06.067616903Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=689.418µs grafana | logger=migrator t=2024-01-22T13:55:06.070297903Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2024-01-22T13:55:06.070892739Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=589.936µs grafana | logger=migrator t=2024-01-22T13:55:06.073165439Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 
grafana | logger=migrator t=2024-01-22T13:55:06.073862187Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=696.608µs grafana | logger=migrator t=2024-01-22T13:55:06.077824111Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2024-01-22T13:55:06.078615111Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=790.59µs grafana | logger=migrator t=2024-01-22T13:55:06.081508687Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2024-01-22T13:55:06.082638567Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.1293ms grafana | logger=migrator t=2024-01-22T13:55:06.085742578Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2024-01-22T13:55:06.085782409Z level=info msg="Migration successfully executed" id="Update org table charset" duration=41.211µs grafana | logger=migrator t=2024-01-22T13:55:06.088066439Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2024-01-22T13:55:06.08810058Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=35.261µs grafana | logger=migrator t=2024-01-22T13:55:06.092202518Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2024-01-22T13:55:06.092392493Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=185.525µs grafana | logger=migrator t=2024-01-22T13:55:06.095393442Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2024-01-22T13:55:06.096517311Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.12337ms grafana | logger=migrator t=2024-01-22T13:55:06.100165557Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2024-01-22T13:55:06.101734868Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.568821ms grafana | logger=migrator t=2024-01-22T13:55:06.105179688Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2024-01-22T13:55:06.106463452Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.279324ms grafana | logger=migrator t=2024-01-22T13:55:06.11133605Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2024-01-22T13:55:06.112048968Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=712.248µs grafana | logger=migrator t=2024-01-22T13:55:06.115348315Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2024-01-22T13:55:06.116264799Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=905.483µs grafana | logger=migrator t=2024-01-22T13:55:06.119405191Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2024-01-22T13:55:06.120553951Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.14839ms grafana | 
logger=migrator t=2024-01-22T13:55:06.125364008Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2024-01-22T13:55:06.136037987Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=10.67076ms grafana | logger=migrator t=2024-01-22T13:55:06.139504968Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2024-01-22T13:55:06.140012492Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=510.944µs grafana | logger=migrator t=2024-01-22T13:55:06.197944131Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2024-01-22T13:55:06.199347798Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.403757ms grafana | logger=migrator t=2024-01-22T13:55:06.204332188Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2024-01-22T13:55:06.205454538Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.12363ms grafana | logger=migrator t=2024-01-22T13:55:06.208465817Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2024-01-22T13:55:06.208784925Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=319.158µs grafana | logger=migrator t=2024-01-22T13:55:06.212337038Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2024-01-22T13:55:06.213349375Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.015537ms grafana | logger=migrator t=2024-01-22T13:55:06.278176575Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2024-01-22T13:55:06.27837992Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=204.315µs grafana | logger=migrator t=2024-01-22T13:55:06.28180191Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2024-01-22T13:55:06.284657195Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.859615ms grafana | logger=migrator t=2024-01-22T13:55:06.288023363Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2024-01-22T13:55:06.289753649Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.730016ms grafana | logger=migrator t=2024-01-22T13:55:06.294024121Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2024-01-22T13:55:06.296700391Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.6748ms grafana | logger=migrator t=2024-01-22T13:55:06.300706006Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2024-01-22T13:55:06.301904207Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.197581ms grafana | logger=migrator t=2024-01-22T13:55:06.305556933Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2024-01-22T13:55:06.308899171Z level=info msg="Migration successfully executed" 
id="Add column plugin_id in dashboard" duration=3.342198ms grafana | logger=migrator t=2024-01-22T13:55:06.313197253Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2024-01-22T13:55:06.313987234Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=789.711µs grafana | logger=migrator t=2024-01-22T13:55:06.317075585Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2024-01-22T13:55:06.317839875Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=764.39µs grafana | logger=migrator t=2024-01-22T13:55:06.320635719Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2024-01-22T13:55:06.320662429Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=27.51µs grafana | logger=migrator t=2024-01-22T13:55:06.324320075Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2024-01-22T13:55:06.324345146Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=27.211µs grafana | logger=migrator t=2024-01-22T13:55:06.327298623Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2024-01-22T13:55:06.330367234Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.067761ms grafana | logger=migrator t=2024-01-22T13:55:06.335420296Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2024-01-22T13:55:06.338705512Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=3.289186ms grafana | logger=migrator t=2024-01-22T13:55:06.34280451Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2024-01-22T13:55:06.344739441Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.934331ms grafana | logger=migrator t=2024-01-22T13:55:06.347500843Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2024-01-22T13:55:06.349475545Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.015323ms grafana | logger=migrator t=2024-01-22T13:55:06.352619407Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2024-01-22T13:55:06.352993857Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=393.58µs grafana | logger=migrator t=2024-01-22T13:55:06.355988376Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2024-01-22T13:55:06.357347341Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.358825ms grafana | logger=migrator t=2024-01-22T13:55:06.361253424Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2024-01-22T13:55:06.362373583Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.119599ms grafana | logger=migrator t=2024-01-22T13:55:06.365579097Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator 
t=2024-01-22T13:55:06.365608288Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=29.951µs grafana | logger=migrator t=2024-01-22T13:55:06.368923755Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2024-01-22T13:55:06.369702495Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=778.26µs grafana | logger=migrator t=2024-01-22T13:55:06.373452044Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2024-01-22T13:55:06.374497051Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.049408ms grafana | logger=migrator t=2024-01-22T13:55:06.377778887Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2024-01-22T13:55:06.386049104Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=8.270877ms grafana | logger=migrator t=2024-01-22T13:55:06.389095284Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2024-01-22T13:55:06.389727261Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=635.496µs grafana | logger=migrator t=2024-01-22T13:55:06.394193968Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2024-01-22T13:55:06.394933387Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=735.629µs grafana | logger=migrator t=2024-01-22T13:55:06.39809139Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2024-01-22T13:55:06.399503567Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.411887ms grafana | logger=migrator t=2024-01-22T13:55:06.402715021Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2024-01-22T13:55:06.403005479Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=290.518µs grafana | logger=migrator t=2024-01-22T13:55:06.406384077Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2024-01-22T13:55:06.406928262Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=539.734µs grafana | logger=migrator t=2024-01-22T13:55:06.410724321Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2024-01-22T13:55:06.4129721Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.238469ms grafana | logger=migrator t=2024-01-22T13:55:06.415674421Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2024-01-22T13:55:06.417638973Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.950471ms grafana | logger=migrator t=2024-01-22T13:55:06.420707043Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2024-01-22T13:55:06.420899758Z 
level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=192.805µs grafana | logger=migrator t=2024-01-22T13:55:06.423474576Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2024-01-22T13:55:06.42363871Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=163.494µs grafana | logger=migrator t=2024-01-22T13:55:06.425799907Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2024-01-22T13:55:06.426647469Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=847.643µs grafana | logger=migrator t=2024-01-22T13:55:06.429432622Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2024-01-22T13:55:06.43165451Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.220838ms grafana | logger=migrator t=2024-01-22T13:55:06.434438043Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2024-01-22T13:55:06.435340467Z level=info msg="Migration successfully executed" id="create data_source table" duration=901.954µs grafana | logger=migrator t=2024-01-22T13:55:06.43813424Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2024-01-22T13:55:06.439416044Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.286804ms grafana | logger=migrator t=2024-01-22T13:55:06.442527245Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2024-01-22T13:55:06.443591123Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.064058ms grafana | logger=migrator t=2024-01-22T13:55:06.446437918Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2024-01-22T13:55:06.44804771Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.609532ms grafana | logger=migrator t=2024-01-22T13:55:06.451596643Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2024-01-22T13:55:06.452292781Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=695.948µs grafana | logger=migrator t=2024-01-22T13:55:06.455079344Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2024-01-22T13:55:06.462015086Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.923812ms grafana | logger=migrator t=2024-01-22T13:55:06.465049376Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2024-01-22T13:55:06.465726764Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=676.228µs grafana | logger=migrator t=2024-01-22T13:55:06.468596229Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2024-01-22T13:55:06.469245726Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=646.517µs grafana | logger=migrator t=2024-01-22T13:55:06.472045669Z 
level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2024-01-22T13:55:06.473375364Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.326895ms grafana | logger=migrator t=2024-01-22T13:55:06.476444305Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2024-01-22T13:55:06.477454541Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.004176ms grafana | logger=migrator t=2024-01-22T13:55:06.480306776Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2024-01-22T13:55:06.482818112Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.510386ms grafana | logger=migrator t=2024-01-22T13:55:06.485672177Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2024-01-22T13:55:06.488018518Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.345801ms grafana | logger=migrator t=2024-01-22T13:55:06.491054458Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2024-01-22T13:55:06.491090769Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=37.121µs grafana | logger=migrator t=2024-01-22T13:55:06.494080797Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2024-01-22T13:55:06.494376735Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=298.968µs grafana | logger=migrator t=2024-01-22T13:55:06.496733427Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2024-01-22T13:55:06.499435828Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.701491ms grafana | logger=migrator t=2024-01-22T13:55:06.502462077Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2024-01-22T13:55:06.502660272Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=198.645µs grafana | logger=migrator t=2024-01-22T13:55:06.504654555Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2024-01-22T13:55:06.50485086Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=193.966µs grafana | logger=migrator t=2024-01-22T13:55:06.50754646Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2024-01-22T13:55:06.510013315Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.462005ms grafana | logger=migrator t=2024-01-22T13:55:06.51286397Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2024-01-22T13:55:06.513051815Z level=info msg="Migration successfully executed" id="Update uid value" duration=187.595µs grafana | logger=migrator t=2024-01-22T13:55:06.516514026Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" zookeeper_1 | ===> User zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper_1 | ===> Configuring ... zookeeper_1 | ===> Running preflight checks ... zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... 
zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper_1 | ===> Launching ... zookeeper_1 | ===> Launching zookeeper ... zookeeper_1 | [2024-01-22 13:55:12,791] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 13:55:12,798] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 13:55:12,798] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 13:55:12,798] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 13:55:12,798] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 13:55:12,799] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-01-22 13:55:12,799] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-01-22 13:55:12,799] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-01-22 13:55:12,799] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper_1 | [2024-01-22 13:55:12,801] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper_1 | [2024-01-22 13:55:12,801] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 13:55:12,801] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 13:55:12,801] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 13:55:12,801] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 13:55:12,801] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-22 13:55:12,801] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper_1 | [2024-01-22 13:55:12,812] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@55b53d44 (org.apache.zookeeper.server.ServerMetrics) zookeeper_1 | [2024-01-22 13:55:12,815] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-01-22 13:55:12,815] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-01-22 13:55:12,817] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-01-22 13:55:12,827] INFO [ZooKeeper ASCII-art startup banner] (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:host.name=328e2f248c5c (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-01-22T13:55:06.518610011Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=2.093305ms grafana | logger=migrator t=2024-01-22T13:55:06.521834505Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2024-01-22T13:55:06.52316742Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.332005ms grafana | logger=migrator t=2024-01-22T13:55:06.52621622Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2024-01-22T13:55:06.52699332Z level=info msg="Migration successfully executed" id="create api_key table" duration=776.97µs grafana | logger=migrator t=2024-01-22T13:55:06.530057761Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2024-01-22T13:55:06.530830981Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=772.56µs grafana | logger=migrator t=2024-01-22T13:55:06.53421089Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2024-01-22T13:55:06.535869613Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.660493ms grafana | logger=migrator t=2024-01-22T13:55:06.539446977Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2024-01-22T13:55:06.540977067Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.53305ms grafana | logger=migrator t=2024-01-22T13:55:06.546298487Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2024-01-22T13:55:06.548461863Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=2.719781ms grafana | logger=migrator t=2024-01-22T13:55:06.551671048Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2024-01-22T13:55:06.552686464Z
level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.012026ms grafana | logger=migrator t=2024-01-22T13:55:06.555691533Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2024-01-22T13:55:06.55709893Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.406967ms grafana | logger=migrator t=2024-01-22T13:55:06.560057557Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2024-01-22T13:55:06.566549568Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.489131ms grafana | logger=migrator t=2024-01-22T13:55:06.569145866Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2024-01-22T13:55:06.569631979Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=483.862µs grafana | logger=migrator t=2024-01-22T13:55:06.57234545Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2024-01-22T13:55:06.572923305Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=575.185µs grafana | logger=migrator t=2024-01-22T13:55:06.575648486Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2024-01-22T13:55:06.57618499Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=534.894µs grafana | logger=migrator t=2024-01-22T13:55:06.579295072Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2024-01-22T13:55:06.580560705Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.268363ms grafana | logger=migrator t=2024-01-22T13:55:06.583835171Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2024-01-22T13:55:06.584305403Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=471.432µs grafana | logger=migrator t=2024-01-22T13:55:06.587005744Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2024-01-22T13:55:06.587548498Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=542.554µs grafana | logger=migrator t=2024-01-22T13:55:06.590303401Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2024-01-22T13:55:06.590329041Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=26.53µs grafana | logger=migrator t=2024-01-22T13:55:06.592347804Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2024-01-22T13:55:06.596574895Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.225661ms grafana | logger=migrator t=2024-01-22T13:55:06.599932273Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator t=2024-01-22T13:55:06.602648714Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.711201ms grafana | logger=migrator t=2024-01-22T13:55:06.617753811Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | 
logger=migrator t=2024-01-22T13:55:06.617965416Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=214.216µs grafana | logger=migrator t=2024-01-22T13:55:06.619917607Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2024-01-22T13:55:06.622574837Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.65653ms grafana | logger=migrator t=2024-01-22T13:55:06.62537489Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2024-01-22T13:55:06.627927857Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.551577ms grafana | logger=migrator t=2024-01-22T13:55:06.631013368Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2024-01-22T13:55:06.631722927Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=716.999µs grafana | logger=migrator t=2024-01-22T13:55:06.634746756Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2024-01-22T13:55:06.635309121Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=563.595µs grafana | logger=migrator t=2024-01-22T13:55:06.638141365Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2024-01-22T13:55:06.63907051Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=929.605µs grafana | logger=migrator t=2024-01-22T13:55:06.641945065Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2024-01-22T13:55:06.642843089Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=898.334µs grafana | logger=migrator t=2024-01-22T13:55:06.645692173Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server 
environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bi
n/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:os.version=4.15.0-192-generic 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,830] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,831] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,831] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,831] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,831] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,831] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper_1 | [2024-01-22 13:55:12,832] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,832] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,833] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-01-22 13:55:12,833] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-01-22 13:55:12,834] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-22 13:55:12,834] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-22 13:55:12,834] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-22 13:55:12,834] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-22 13:55:12,834] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-22 13:55:12,834] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-22 13:55:12,836] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,837] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,837] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) grafana | logger=migrator t=2024-01-22T13:55:06.64670202Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.009857ms grafana | logger=migrator t=2024-01-22T13:55:06.649608256Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2024-01-22T13:55:06.650717755Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.109039ms grafana | logger=migrator t=2024-01-22T13:55:06.653748515Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2024-01-22T13:55:06.653832937Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=85.873µs grafana | logger=migrator t=2024-01-22T13:55:06.656057735Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2024-01-22T13:55:06.656103556Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=47.661µs grafana | logger=migrator t=2024-01-22T13:55:06.659289Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2024-01-22T13:55:06.662055632Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.767012ms grafana | logger=migrator t=2024-01-22T13:55:06.66500125Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2024-01-22T13:55:06.667057434Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.057144ms grafana | logger=migrator t=2024-01-22T13:55:06.669741844Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2024-01-22T13:55:06.669790925Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=49.241µs grafana | logger=migrator t=2024-01-22T13:55:06.671879Z level=info msg="Executing migration" id="create 
quota table v1" grafana | logger=migrator t=2024-01-22T13:55:06.672584649Z level=info msg="Migration successfully executed" id="create quota table v1" duration=706.359µs grafana | logger=migrator t=2024-01-22T13:55:06.675384632Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2024-01-22T13:55:06.676241904Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=855.572µs grafana | logger=migrator t=2024-01-22T13:55:06.679218153Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2024-01-22T13:55:06.679288994Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=72.222µs grafana | logger=migrator t=2024-01-22T13:55:06.682320744Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2024-01-22T13:55:06.683470604Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.14907ms grafana | logger=migrator t=2024-01-22T13:55:06.68673912Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2024-01-22T13:55:06.687637913Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=897.413µs grafana | logger=migrator t=2024-01-22T13:55:06.690912839Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2024-01-22T13:55:06.694433792Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.518502ms grafana | logger=migrator t=2024-01-22T13:55:06.697952744Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2024-01-22T13:55:06.697997395Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=43.771µs grafana | logger=migrator t=2024-01-22T13:55:06.700685786Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2024-01-22T13:55:06.701654471Z level=info msg="Migration successfully executed" id="create session table" duration=968.376µs grafana | logger=migrator t=2024-01-22T13:55:06.704845235Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2024-01-22T13:55:06.704956698Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=113.373µs grafana | logger=migrator t=2024-01-22T13:55:06.707648778Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2024-01-22T13:55:06.70773347Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=83.442µs grafana | logger=migrator t=2024-01-22T13:55:06.710277737Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2024-01-22T13:55:06.711008946Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=728.939µs grafana | logger=migrator t=2024-01-22T13:55:06.713637695Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2024-01-22T13:55:06.714552259Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=914.104µs grafana | logger=migrator t=2024-01-22T13:55:06.717284811Z 
level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2024-01-22T13:55:06.717326992Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=43.311µs grafana | logger=migrator t=2024-01-22T13:55:06.720240398Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2024-01-22T13:55:06.720335401Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=96.043µs grafana | logger=migrator t=2024-01-22T13:55:06.723042372Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2024-01-22T13:55:06.726236166Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.193004ms grafana | logger=migrator t=2024-01-22T13:55:06.73058172Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2024-01-22T13:55:06.734602605Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=4.017546ms grafana | logger=migrator t=2024-01-22T13:55:06.739819922Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2024-01-22T13:55:06.740208242Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=394.77µs grafana | logger=migrator t=2024-01-22T13:55:06.744039212Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2024-01-22T13:55:06.744359451Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=320.059µs grafana | logger=migrator t=2024-01-22T13:55:06.750545713Z level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2024-01-22T13:55:06.751598771Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.052038ms grafana | logger=migrator t=2024-01-22T13:55:06.760494904Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2024-01-22T13:55:06.760577476Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=86.572µs zookeeper_1 | [2024-01-22 13:55:12,837] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper_1 | [2024-01-22 13:55:12,837] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:12,858] INFO Logging initialized @551ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper_1 | [2024-01-22 13:55:12,951] WARN o.e.j.s.ServletContextHandler@49c90a9c{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-01-22 13:55:12,951] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-01-22 13:55:12,969] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) zookeeper_1 | [2024-01-22 13:55:13,001] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper_1 | [2024-01-22 13:55:13,001] INFO No SessionScavenger set, using defaults 
(org.eclipse.jetty.server.session) zookeeper_1 | [2024-01-22 13:55:13,003] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) zookeeper_1 | [2024-01-22 13:55:13,006] WARN ServletContext@o.e.j.s.ServletContextHandler@49c90a9c{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper_1 | [2024-01-22 13:55:13,016] INFO Started o.e.j.s.ServletContextHandler@49c90a9c{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-01-22 13:55:13,033] INFO Started ServerConnector@723ca036{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper_1 | [2024-01-22 13:55:13,033] INFO Started @727ms (org.eclipse.jetty.server.Server) zookeeper_1 | [2024-01-22 13:55:13,033] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper_1 | [2024-01-22 13:55:13,038] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper_1 | [2024-01-22 13:55:13,039] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper_1 | [2024-01-22 13:55:13,040] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper_1 | [2024-01-22 13:55:13,041] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper_1 | [2024-01-22 13:55:13,064] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper_1 | [2024-01-22 13:55:13,065] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper_1 | [2024-01-22 13:55:13,066] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-01-22 13:55:13,066] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-01-22 13:55:13,071] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper_1 | [2024-01-22 13:55:13,071] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-01-22 13:55:13,074] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-01-22 13:55:13,075] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-01-22 13:55:13,075] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-22 13:55:13,088] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper_1 | [2024-01-22 13:55:13,087] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper_1 | [2024-01-22 13:55:13,103] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper_1 | [2024-01-22 13:55:13,104] INFO ZooKeeper audit is disabled. 
(org.apache.zookeeper.audit.ZKAuditProvider) zookeeper_1 | [2024-01-22 13:55:16,129] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) grafana | logger=migrator t=2024-01-22T13:55:06.763651247Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2024-01-22T13:55:06.766968154Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.316047ms grafana | logger=migrator t=2024-01-22T13:55:06.769897791Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2024-01-22T13:55:06.770146447Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=247.886µs grafana | logger=migrator t=2024-01-22T13:55:06.774304026Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2024-01-22T13:55:06.777740666Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.43355ms grafana | logger=migrator t=2024-01-22T13:55:06.780903599Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2024-01-22T13:55:06.784062682Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.157853ms grafana | logger=migrator t=2024-01-22T13:55:06.787488782Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2024-01-22T13:55:06.787649016Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=161.364µs grafana | logger=migrator t=2024-01-22T13:55:06.791635201Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2024-01-22T13:55:06.792677798Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.042267ms grafana | logger=migrator t=2024-01-22T13:55:06.795925853Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2024-01-22T13:55:06.796834607Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=908.394µs grafana | logger=migrator t=2024-01-22T13:55:06.799956409Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2024-01-22T13:55:06.800971506Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.013046ms grafana | logger=migrator t=2024-01-22T13:55:06.805179366Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2024-01-22T13:55:06.806209553Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.029147ms grafana | logger=migrator t=2024-01-22T13:55:06.810059354Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2024-01-22T13:55:06.810949737Z level=info msg="Migration successfully executed" id="add index alert state" duration=888.213µs grafana | logger=migrator t=2024-01-22T13:55:06.815073115Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2024-01-22T13:55:06.815949568Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=876.733µs grafana | logger=migrator t=2024-01-22T13:55:06.819158733Z level=info msg="Executing migration" id="Create 
alert_rule_tag table v1" grafana | logger=migrator t=2024-01-22T13:55:06.81983215Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=672.628µs grafana | logger=migrator t=2024-01-22T13:55:06.822957032Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2024-01-22T13:55:06.823915297Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=958.445µs grafana | logger=migrator t=2024-01-22T13:55:06.826708181Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2024-01-22T13:55:06.827599154Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=890.134µs grafana | logger=migrator t=2024-01-22T13:55:06.832275737Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2024-01-22T13:55:06.844821956Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=12.546149ms grafana | logger=migrator t=2024-01-22T13:55:06.849303173Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2024-01-22T13:55:06.849823467Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=519.794µs grafana | logger=migrator t=2024-01-22T13:55:06.853857593Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2024-01-22T13:55:06.85489047Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.032268ms grafana | logger=migrator t=2024-01-22T13:55:06.858242898Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2024-01-22T13:55:06.858593077Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=349.63µs grafana | logger=migrator t=2024-01-22T13:55:06.862203461Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2024-01-22T13:55:06.862760506Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=556.875µs grafana | logger=migrator t=2024-01-22T13:55:06.865774915Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2024-01-22T13:55:06.866413252Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=638.357µs grafana | logger=migrator t=2024-01-22T13:55:06.870243742Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2024-01-22T13:55:06.873942509Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.698117ms grafana | logger=migrator t=2024-01-22T13:55:06.879436713Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2024-01-22T13:55:06.88313542Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.697917ms grafana | logger=migrator t=2024-01-22T13:55:06.888732097Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2024-01-22T13:55:06.892890706Z 
level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.160969ms grafana | logger=migrator t=2024-01-22T13:55:06.897310162Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2024-01-22T13:55:06.900942527Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.632205ms grafana | logger=migrator t=2024-01-22T13:55:06.903688919Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2024-01-22T13:55:06.90485832Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.171371ms grafana | logger=migrator t=2024-01-22T13:55:06.907866989Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2024-01-22T13:55:06.907996212Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=129.123µs kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2024-01-22 13:55:16,075] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,076] INFO Client environment:host.name=1b1ecfacb928 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,076] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,076] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,076] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,076] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-metadata-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka_2.13-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.5.3-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.5.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.5.3.jar:/usr/share/java/cp-base-new/kafka-storage-7.5.3-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.5.3-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.5.3.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,076] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,076] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,076] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,076] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,076] INFO 
Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,076] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,076] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,076] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,076] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,076] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,077] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,077] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,079] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@62bd765 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:16,082] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-01-22 13:55:16,086] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-01-22 13:55:16,092] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 13:55:16,107] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 13:55:16,107] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 13:55:16,116] INFO Socket connection established, initiating session, client: /172.17.0.8:55306, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 13:55:16,151] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000004db890000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 13:55:16,277] INFO EventThread shut down for session: 0x1000004db890000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 13:55:16,277] INFO Session: 0x1000004db890000 closed (org.apache.zookeeper.ZooKeeper) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... kafka | [2024-01-22 13:55:16,977] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-01-22 13:55:17,298] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-01-22 13:55:17,370] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-01-22 13:55:17,371] INFO starting (kafka.server.KafkaServer) kafka | [2024-01-22 13:55:17,371] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-01-22 13:55:17,385] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2024-01-22 13:55:17,389] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,389] INFO Client environment:host.name=1b1ecfacb928 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,389] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,389] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,389] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,389] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/.
./share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,389] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,389] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,390] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,390] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,390] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,390] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,390] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,390] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,390] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,390] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,390] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,390] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,392] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@32193bea (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-22 13:55:17,396] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-01-22 13:55:17,401] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 13:55:17,402] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-01-22 13:55:17,407] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 13:55:17,413] INFO Socket connection established, initiating session, client: /172.17.0.8:55308, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 13:55:17,425] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000004db890001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-22 13:55:17,432] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2024-01-22 13:55:17,782] INFO Cluster ID = YXDHh3LaSIyP8FezJr0IvQ (kafka.server.KafkaServer) kafka | [2024-01-22 13:55:17,786] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2024-01-22 13:55:17,837] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.consumer.assignors = [] kafka | group.consumer.heartbeat.interval.ms = 5000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 kafka | group.consumer.max.session.timeout.ms = 60000 kafka | group.consumer.max.size = 2147483647 kafka | group.consumer.min.heartbeat.interval.ms = 5000 kafka | group.consumer.min.session.timeout.ms = 45000 kafka | group.consumer.session.timeout.ms = 45000 kafka | group.coordinator.new.enable = false kafka | group.coordinator.threads = 1 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.5-IV2 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = 
PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] grafana | logger=migrator t=2024-01-22T13:55:06.912005217Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2024-01-22T13:55:06.91209347Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=87.463µs grafana | logger=migrator t=2024-01-22T13:55:06.915424377Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2024-01-22T13:55:06.91666517Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.240613ms grafana | logger=migrator t=2024-01-22T13:55:06.919893134Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2024-01-22T13:55:06.920945882Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.052208ms grafana | logger=migrator t=2024-01-22T13:55:06.924989428Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator 
t=2024-01-22T13:55:06.925897202Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=908.154µs grafana | logger=migrator t=2024-01-22T13:55:06.929076625Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2024-01-22T13:55:06.929913737Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=837.142µs grafana | logger=migrator t=2024-01-22T13:55:06.93308533Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2024-01-22T13:55:06.934093877Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.008277ms grafana | logger=migrator t=2024-01-22T13:55:06.937619769Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2024-01-22T13:55:06.941515221Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.895202ms grafana | logger=migrator t=2024-01-22T13:55:06.94451332Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2024-01-22T13:55:06.948131405Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.619685ms grafana | logger=migrator t=2024-01-22T13:55:06.951212926Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2024-01-22T13:55:06.951514874Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=302.048µs grafana | logger=migrator t=2024-01-22T13:55:07.010582393Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2024-01-22T13:55:07.01239619Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.816427ms grafana | logger=migrator t=2024-01-22T13:55:07.017290289Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2024-01-22T13:55:07.018266914Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=976.655µs grafana | logger=migrator t=2024-01-22T13:55:07.021055957Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2024-01-22T13:55:07.02533418Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.276962ms grafana | logger=migrator t=2024-01-22T13:55:07.029190851Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2024-01-22T13:55:07.029342645Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=150.144µs grafana | logger=migrator t=2024-01-22T13:55:07.03258191Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2024-01-22T13:55:07.033480533Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=898.373µs grafana | logger=migrator t=2024-01-22T13:55:07.036325438Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2024-01-22T13:55:07.037289083Z level=info 
msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=962.855µs grafana | logger=migrator t=2024-01-22T13:55:07.041154534Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2024-01-22T13:55:07.041496143Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=169.295µs grafana | logger=migrator t=2024-01-22T13:55:07.044706098Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2024-01-22T13:55:07.045588491Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=881.703µs grafana | logger=migrator t=2024-01-22T13:55:07.048546108Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator t=2024-01-22T13:55:07.049443062Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=896.744µs grafana | logger=migrator t=2024-01-22T13:55:07.053236991Z level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator t=2024-01-22T13:55:07.054145445Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=908.384µs grafana | logger=migrator t=2024-01-22T13:55:07.057155344Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2024-01-22T13:55:07.058069708Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=914.294µs mariadb | 2024-01-22 13:55:08+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-01-22 13:55:08+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-01-22 13:55:08+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-01-22 13:55:08+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-01-22 13:55:08 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-01-22 13:55:08 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-01-22 13:55:08 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. 
mariadb |
mariadb | Consider joining MariaDB's strong and vibrant community:
mariadb | https://mariadb.org/get-involved/
mariadb |
mariadb | 2024-01-22 13:55:10+00:00 [Note] [Entrypoint]: Database files initialized
mariadb | 2024-01-22 13:55:10+00:00 [Note] [Entrypoint]: Starting temporary server
mariadb | 2024-01-22 13:55:10+00:00 [Note] [Entrypoint]: Waiting for server startup
mariadb | 2024-01-22 13:55:10 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ...
mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: Number of transaction pools: 1
mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mariadb | 2024-01-22 13:55:10 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
mariadb | 2024-01-22 13:55:10 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-01-22 13:55:10 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: Completed initialization of buffer pool
mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: 128 rollback segments are active.
mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
mariadb | 2024-01-22 13:55:10 0 [Note] InnoDB: log sequence number 46574; transaction id 14
mariadb | 2024-01-22 13:55:10 0 [Note] Plugin 'FEEDBACK' is disabled.
mariadb | 2024-01-22 13:55:10 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb | 2024-01-22 13:55:10 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
mariadb | 2024-01-22 13:55:10 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
mariadb | 2024-01-22 13:55:10 0 [Note] mariadbd: ready for connections.
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution
mariadb | 2024-01-22 13:55:11+00:00 [Note] [Entrypoint]: Temporary server started.
mariadb | 2024-01-22 13:55:13+00:00 [Note] [Entrypoint]: Creating user policy_user
mariadb | 2024-01-22 13:55:13+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
mariadb |
mariadb |
mariadb | 2024-01-22 13:55:13+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
mariadb | 2024-01-22 13:55:13+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
mariadb | #!/bin/bash -xv
mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
mariadb | #
mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
mariadb | # you may not use this file except in compliance with the License.
mariadb | # You may obtain a copy of the License at
mariadb | #
mariadb | # http://www.apache.org/licenses/LICENSE-2.0
mariadb | #
mariadb | # Unless required by applicable law or agreed to in writing, software
mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
mariadb | # See the License for the specific language governing permissions and
mariadb | # limitations under the License.
mariadb |
mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | do
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
mariadb | done
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb |
kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka |
password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.manager.class.name = null kafka | remote.log.metadata.manager.class.path = null mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" kafka | remote.log.metadata.manager.impl.prefix = null grafana | logger=migrator t=2024-01-22T13:55:07.061025966Z level=info msg="Executing migration" id="add index annotation 3 v3" policy-apex-pdp | Waiting for mariadb port 3306... policy-api | Waiting for mariadb port 3306... mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' kafka | remote.log.metadata.manager.listener.name = null grafana | logger=migrator t=2024-01-22T13:55:07.062072653Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.046327ms grafana | logger=migrator t=2024-01-22T13:55:07.065897403Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2024-01-22T13:55:07.066940061Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.042218ms policy-apex-pdp | mariadb (172.17.0.5:3306) open policy-api | mariadb (172.17.0.5:3306) open mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql policy-pap | Waiting for mariadb port 3306... policy-pap | mariadb (172.17.0.5:3306) open kafka | remote.log.reader.max.pending.tasks = 100 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json grafana | logger=migrator t=2024-01-22T13:55:07.069676182Z level=info msg="Executing migration" id="Update annotation table charset" policy-apex-pdp | Waiting for kafka port 9092... policy-api | Waiting for policy-db-migrator port 6824... mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp prometheus | ts=2024-01-22T13:55:04.882Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d policy-pap | Waiting for kafka port 9092... kafka | remote.log.reader.threads = 10 simulator | overriding logback.xml policy-db-migrator | Waiting for mariadb port 3306... 
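db.sh above creates six schemas (migration, pooling, policyadmin, operationshistory, clampacm, policyclamp) and grants policy_user full privileges on each. A minimal spot-check of that setup, assuming the container is named mariadb and reusing the credentials echoed by the -xv trace, might look like this; it is a sketch, not part of the CSIT job itself:

# Hedged sketch: confirm the schemas and grants created by db.sh.
# Container name and reachability from the build host are assumptions.
docker exec mariadb mysql -upolicy_user -ppolicy_user \
  --execute "SHOW DATABASES; SHOW GRANTS FOR CURRENT_USER();"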
grafana | logger=migrator t=2024-01-22T13:55:07.069777925Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=101.203µs policy-apex-pdp | kafka (172.17.0.8:9092) open policy-api | policy-db-migrator (172.17.0.7:6824) open mariadb | prometheus | ts=2024-01-22T13:55:04.882Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)" policy-pap | kafka (172.17.0.8:9092) open kafka | remote.log.storage.manager.class.name = null simulator | 2024-01-22 13:55:07,470 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused grafana | logger=migrator t=2024-01-22T13:55:07.071803258Z level=info msg="Executing migration" id="Add column region_id to annotation table" policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml mariadb | 2024-01-22 13:55:14+00:00 [Note] [Entrypoint]: Stopping temporary server prometheus | ts=2024-01-22T13:55:04.882Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)" policy-pap | Waiting for api port 6969... kafka | remote.log.storage.manager.class.path = null simulator | 2024-01-22 13:55:07,556 INFO org.onap.policy.models.simulators starting grafana | logger=migrator t=2024-01-22T13:55:07.075869415Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.065367ms grafana | logger=migrator t=2024-01-22T13:55:07.079422938Z level=info msg="Executing migration" id="Drop category_id index" policy-apex-pdp | Waiting for pap port 6969... policy-api | mariadb | 2024-01-22 13:55:14 0 [Note] mariadbd (initiated by: unknown): Normal shutdown prometheus | ts=2024-01-22T13:55:04.882Z caller=main.go:594 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" policy-pap | api (172.17.0.9:6969) open kafka | remote.log.storage.manager.impl.prefix = null simulator | 2024-01-22 13:55:07,556 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties grafana | logger=migrator t=2024-01-22T13:55:07.080360023Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=935.905µs grafana | logger=migrator t=2024-01-22T13:55:07.083050913Z level=info msg="Executing migration" id="Add column tags to annotation table" policy-apex-pdp | pap (172.17.0.10:6969) open policy-api | . ____ _ __ _ _ mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: FTS optimize thread exiting. 
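Prometheus reports starting in server mode and listening on 0.0.0.0:9090. Once the "TSDB started" and "Completed loading of configuration file" messages below appear, readiness can be confirmed via its standard probe endpoints; mapping 9090 to localhost is an assumption about the compose port bindings:

# Hedged sketch: Prometheus exposes /-/ready and /-/healthy for probes.
curl -s http://localhost:9090/-/ready
curl -s http://localhost:9090/-/healthy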
prometheus | ts=2024-01-22T13:55:04.882Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)" policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml kafka | remote.log.storage.system.enable = false simulator | 2024-01-22 13:55:07,784 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION grafana | logger=migrator t=2024-01-22T13:55:07.086952105Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.900972ms grafana | logger=migrator t=2024-01-22T13:55:07.133236759Z level=info msg="Executing migration" id="Create annotation_tag table v2" policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2024-01-22T13:55:49.256+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Starting shutdown... prometheus | ts=2024-01-22T13:55:04.882Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)" policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json kafka | replica.fetch.backoff.ms = 1000 simulator | 2024-01-22 13:55:07,785 INFO org.onap.policy.models.simulators starting A&AI simulator grafana | logger=migrator t=2024-01-22T13:55:07.134574764Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.340335ms grafana | logger=migrator t=2024-01-22T13:55:07.13898171Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-apex-pdp | [2024-01-22T13:55:49.501+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool prometheus | ts=2024-01-22T13:55:04.883Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 policy-pap | kafka | replica.fetch.max.bytes = 1048576 simulator | 2024-01-22 13:55:07,883 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused
grafana | logger=migrator t=2024-01-22T13:55:07.141189798Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=2.208738ms
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-apex-pdp | allow.auto.create.topics = true
mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Buffer pool(s) dump completed at 240122 13:55:14
prometheus | ts=2024-01-22T13:55:04.884Z caller=main.go:1039 level=info msg="Starting TSDB ..."
policy-pap | . ____ _ __ _ _
kafka | replica.fetch.min.bytes = 1
simulator | 2024-01-22 13:55:07,894 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused
grafana | logger=migrator t=2024-01-22T13:55:07.144965227Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
policy-apex-pdp | auto.commit.interval.ms = 5000
mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
prometheus | ts=2024-01-22T13:55:04.887Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090
policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
kafka | replica.fetch.response.max.bytes = 10485760
simulator | 2024-01-22 13:55:07,897 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused
grafana | logger=migrator t=2024-01-22T13:55:07.145910542Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=945.295µs
policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
policy-apex-pdp | auto.include.jmx.reporter = true
mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Shutdown completed; log sequence number 347209; transaction id 298
prometheus | ts=2024-01-22T13:55:04.887Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
kafka | replica.fetch.wait.max.ms = 500
simulator | 2024-01-22 13:55:07,900 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused
grafana | logger=migrator t=2024-01-22T13:55:07.149606109Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
policy-api | =========|_|==============|___/=/_/_/_/
policy-apex-pdp | auto.offset.reset = latest
mariadb | 2024-01-22 13:55:14 0 [Note] mariadbd: Shutdown complete
prometheus | ts=2024-01-22T13:55:04.893Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
simulator | 2024-01-22 13:55:07,961 INFO Session workerName=node0
policy-db-migrator | Connection to mariadb (172.17.0.5) 3306 port [tcp/mysql] succeeded!
grafana | logger=migrator t=2024-01-22T13:55:07.165006533Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=15.396073ms
policy-api | :: Spring Boot :: (v3.1.4)
policy-apex-pdp | bootstrap.servers = [kafka:9092]
mariadb |
prometheus | ts=2024-01-22T13:55:04.893Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.19µs
policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
kafka | replica.lag.time.max.ms = 30000
simulator | 2024-01-22 13:55:08,567 INFO Using GSON for REST calls
policy-db-migrator | 321 blocks
grafana | logger=migrator t=2024-01-22T13:55:07.169149981Z level=info msg="Executing migration" id="Create annotation_tag table v3"
policy-api |
policy-apex-pdp | check.crcs = true
mariadb | 2024-01-22 13:55:14+00:00 [Note] [Entrypoint]: Temporary server stopped
prometheus | ts=2024-01-22T13:55:04.893Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while"
policy-pap | =========|_|==============|___/=/_/_/_/
kafka | replica.selector.class = null
simulator | 2024-01-22 13:55:08,649 INFO Started o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}
policy-db-migrator | Preparing upgrade release version: 0800
grafana | logger=migrator t=2024-01-22T13:55:07.169739667Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=589.726µs
policy-api | [2024-01-22T13:55:24.406+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.9 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
mariadb |
prometheus | ts=2024-01-22T13:55:04.893Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
policy-pap | :: Spring Boot :: (v3.1.7)
kafka | replica.socket.receive.buffer.bytes = 65536
simulator | 2024-01-22 13:55:08,657 INFO Started A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
policy-db-migrator | Preparing upgrade release version: 0900
grafana | logger=migrator t=2024-01-22T13:55:07.173031103Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
policy-api | [2024-01-22T13:55:24.408+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-apex-pdp | client.id = consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-1
mariadb | 2024-01-22 13:55:14+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
prometheus | ts=2024-01-22T13:55:04.893Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=27.442µs wal_replay_duration=540.474µs wbl_replay_duration=180ns total_replay_duration=613.058µs
policy-pap |
kafka | replica.socket.timeout.ms = 30000
simulator | 2024-01-22 13:55:08,664 INFO Started Server@16746061{STARTING}[11.0.18,sto=0] @1783ms
policy-db-migrator | Preparing upgrade release version: 1000
grafana | logger=migrator t=2024-01-22T13:55:07.173991588Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=960.265µs
policy-api | [2024-01-22T13:55:26.223+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-apex-pdp | client.rack =
mariadb |
prometheus | ts=2024-01-22T13:55:04.896Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC
policy-pap | [2024-01-22T13:55:38.094+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.9 with PID 34 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
kafka | replication.quota.window.num = 11
simulator | 2024-01-22 13:55:08,664 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4233 ms.
policy-db-migrator | Preparing upgrade release version: 1100
grafana | logger=migrator t=2024-01-22T13:55:07.176787871Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
policy-api | [2024-01-22T13:55:26.320+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 80 ms. Found 6 JPA repository interfaces.
policy-apex-pdp | connections.max.idle.ms = 540000
mariadb | 2024-01-22 13:55:14 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
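The policy-db-migrator entries above show it polling MariaDB with nc until the port accepts a connection before it starts applying scripts. A minimal sketch of that wait-for-port pattern in Java (an illustration only, not the migrator's actual script; host, port, and timeout values simply mirror the log):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public final class WaitForMariadb {
        public static void main(String[] args) throws InterruptedException {
            String host = "mariadb"; // resolves to 172.17.0.5 in the log above
            int port = 3306;
            while (true) {
                // Equivalent of the repeated "nc ... 3306" probe: a bare TCP connect.
                try (Socket socket = new Socket()) {
                    socket.connect(new InetSocketAddress(host, port), 1000);
                    System.out.println("Connection to " + host + " " + port + " port [tcp/mysql] succeeded!");
                    return; // port is open; the migrator can proceed
                } catch (IOException e) {
                    System.out.println("connect to " + host + " port " + port + " (tcp) failed: " + e.getMessage());
                    Thread.sleep(1000); // back off briefly before retrying
                }
            }
        }
    }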
prometheus | ts=2024-01-22T13:55:04.896Z caller=main.go:1063 level=info msg="TSDB started"
policy-pap | [2024-01-22T13:55:38.095+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
kafka | replication.quota.window.size.seconds = 1
simulator | 2024-01-22 13:55:08,669 INFO org.onap.policy.models.simulators starting SDNC simulator
policy-db-migrator | Preparing upgrade release version: 1200
grafana | logger=migrator t=2024-01-22T13:55:07.177143701Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=351.4µs
policy-api | [2024-01-22T13:55:26.719+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
policy-apex-pdp | default.api.timeout.ms = 60000
mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
prometheus | ts=2024-01-22T13:55:04.896Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
policy-pap | [2024-01-22T13:55:40.118+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
kafka | request.timeout.ms = 30000
simulator | 2024-01-22 13:55:08,672 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-db-migrator | Preparing upgrade release version: 1300
grafana | logger=migrator t=2024-01-22T13:55:07.180814087Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
policy-api | [2024-01-22T13:55:26.720+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
policy-apex-pdp | enable.auto.commit = true
mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Number of transaction pools: 1
prometheus | ts=2024-01-22T13:55:04.898Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.712397ms db_storage=2.24µs remote_storage=2.941µs web_handler=890ns query_engine=1.81µs scrape=321.234µs scrape_sd=162.528µs notify=38.632µs notify_sd=15.5µs rules=3.42µs tracing=10.51µs
policy-pap | [2024-01-22T13:55:40.249+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 119 ms. Found 7 JPA repository interfaces.
kafka | reserved.broker.max.id = 1000
simulator | 2024-01-22 13:55:08,672 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | Done
grafana | logger=migrator t=2024-01-22T13:55:07.181439773Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=624.886µs
policy-api | [2024-01-22T13:55:27.411+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-apex-pdp | exclude.internal.topics = true
mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
prometheus | ts=2024-01-22T13:55:04.898Z caller=main.go:1024 level=info msg="Server is ready to receive web requests."
policy-pap | [2024-01-22T13:55:40.696+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
kafka | sasl.client.callback.handler.class = null
simulator | 2024-01-22 13:55:08,673 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | name version
grafana | logger=migrator t=2024-01-22T13:55:07.184521604Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
policy-api | [2024-01-22T13:55:27.420+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-apex-pdp | fetch.max.bytes = 52428800
mariadb | 2024-01-22 13:55:14 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
prometheus | ts=2024-01-22T13:55:04.898Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..."
policy-pap | [2024-01-22T13:55:40.696+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
kafka | sasl.enabled.mechanisms = [GSSAPI]
simulator | 2024-01-22 13:55:08,675 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
policy-db-migrator | policyadmin 0
grafana | logger=migrator t=2024-01-22T13:55:07.184777311Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=255.571µs
policy-api | [2024-01-22T13:55:27.422+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-apex-pdp | fetch.max.wait.ms = 500
mariadb | 2024-01-22 13:55:14 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
policy-pap | [2024-01-22T13:55:41.444+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
kafka | sasl.jaas.config = null
simulator | 2024-01-22 13:55:08,680 INFO Session workerName=node0
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
grafana | logger=migrator t=2024-01-22T13:55:07.187734979Z level=info msg="Executing migration" id="Add created time to annotation table"
policy-api | [2024-01-22T13:55:27.422+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16]
policy-apex-pdp | fetch.min.bytes = 1
mariadb | 2024-01-22 13:55:14 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
policy-pap | [2024-01-22T13:55:41.455+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
simulator | 2024-01-22 13:55:08,836 INFO Using GSON for REST calls
policy-db-migrator | upgrade: 0 -> 1300
grafana | logger=migrator t=2024-01-22T13:55:07.191902878Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.167569ms
policy-api | [2024-01-22T13:55:27.513+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-apex-pdp | group.id = e65163a7-0954-4bf8-9924-8c41fa40f9af
mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
policy-pap | [2024-01-22T13:55:41.458+00:00|INFO|StandardService|main] Starting service [Tomcat]
kafka | sasl.kerberos.min.time.before.relogin = 60000
simulator | 2024-01-22 13:55:08,852 INFO Started o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:07.196073327Z level=info msg="Executing migration" id="Add updated time to annotation table"
policy-api | [2024-01-22T13:55:27.513+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3028 ms
policy-apex-pdp | group.instance.id = null
mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Completed initialization of buffer pool
policy-pap | [2024-01-22T13:55:41.458+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
simulator | 2024-01-22 13:55:08,853 INFO Started SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
grafana | logger=migrator t=2024-01-22T13:55:07.200694498Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.620091ms
policy-api | [2024-01-22T13:55:28.106+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-apex-pdp | heartbeat.interval.ms = 3000
mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
policy-pap | [2024-01-22T13:55:41.554+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
kafka | sasl.kerberos.service.name = null
simulator | 2024-01-22 13:55:08,853 INFO Started Server@75459c75{STARTING}[11.0.18,sto=0] @1972ms
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:07.203886812Z level=info msg="Executing migration" id="Add index for created in annotation table"
policy-api | [2024-01-22T13:55:28.181+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
policy-apex-pdp | interceptor.classes = []
mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: 128 rollback segments are active.
policy-pap | [2024-01-22T13:55:41.554+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3373 ms
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
simulator | 2024-01-22 13:55:08,853 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4820 ms.
grafana | logger=migrator t=2024-01-22T13:55:07.205094264Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.207142ms
policy-api | [2024-01-22T13:55:28.184+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
policy-apex-pdp | internal.leave.group.on.close = true
mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
policy-pap | [2024-01-22T13:55:42.014+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
simulator | 2024-01-22 13:55:08,855 INFO org.onap.policy.models.simulators starting SO simulator
grafana | logger=migrator t=2024-01-22T13:55:07.208426471Z level=info msg="Executing migration" id="Add index for updated in annotation table"
policy-api | [2024-01-22T13:55:28.235+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
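The migrator's "policyadmin: upgrade available: 0 -> 1300" header and the numbered "> upgrade NNNN-*.sql" blocks that follow are the familiar ordered-migration pattern: compare the stored schema version with the available scripts and apply, in filename order, only those above the current version. A minimal sketch of that general idea (hypothetical code, not the ONAP db-migrator itself; the sql/ directory layout, JDBC URL, and credentials are assumptions):

    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public final class OrderedMigrationSketch {
        public static void main(String[] args) throws Exception {
            int currentVersion = 0; // in practice read from a schema-version table ("policyadmin 0")
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/policyadmin", "user", "password")) { // hypothetical credentials
                List<Path> scripts = new ArrayList<>();
                try (DirectoryStream<Path> dir = Files.newDirectoryStream(Path.of("sql"), "*.sql")) {
                    dir.forEach(scripts::add);
                }
                // Names like 0100-jpapdpgroup_properties.sql sort into apply order.
                scripts.sort(Comparator.comparing(p -> p.getFileName().toString()));
                for (Path script : scripts) {
                    int version = Integer.parseInt(script.getFileName().toString().substring(0, 4));
                    if (version <= currentVersion) {
                        continue; // already applied in an earlier run
                    }
                    System.out.println("> upgrade " + script.getFileName());
                    try (Statement stmt = conn.createStatement()) {
                        stmt.execute(Files.readString(script)); // each script here holds one DDL statement
                    }
                }
            }
        }
    }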
policy-pap | [2024-01-22T13:55:42.102+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
kafka | sasl.login.callback.handler.class = null
simulator | 2024-01-22 13:55:08,859 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-api | [2024-01-22T13:55:28.596+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-apex-pdp | isolation.level = read_uncommitted
mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: log sequence number 347209; transaction id 299
policy-pap | [2024-01-22T13:55:42.105+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
kafka | sasl.login.class = null
grafana | logger=migrator t=2024-01-22T13:55:07.209113459Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=686.938µs
simulator | 2024-01-22 13:55:08,860 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-api | [2024-01-22T13:55:28.617+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
mariadb | 2024-01-22 13:55:14 0 [Note] Plugin 'FEEDBACK' is disabled.
policy-pap | [2024-01-22T13:55:42.147+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
kafka | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-01-22T13:55:07.213486744Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
simulator | 2024-01-22 13:55:08,863 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-apex-pdp | max.partition.fetch.bytes = 1048576
policy-pap | [2024-01-22T13:55:42.488+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
kafka | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-01-22T13:55:07.213799322Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=312.558µs
policy-db-migrator | --------------
simulator | 2024-01-22 13:55:08,864 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
policy-api | [2024-01-22T13:55:28.716+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2620e717
policy-api | [2024-01-22T13:55:28.718+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-pap | [2024-01-22T13:55:42.507+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
kafka | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-01-22T13:55:07.21602079Z level=info msg="Executing migration" id="Add epoch_end column"
policy-db-migrator |
simulator | 2024-01-22 13:55:08,868 INFO Session workerName=node0
mariadb | 2024-01-22 13:55:14 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
policy-api | [2024-01-22T13:55:28.749+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default)
policy-pap | [2024-01-22T13:55:42.638+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@288ca5f0
kafka | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-01-22T13:55:07.220789035Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.766215ms
policy-db-migrator |
simulator | 2024-01-22 13:55:08,941 INFO Using GSON for REST calls
mariadb | 2024-01-22 13:55:14 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
policy-apex-pdp | max.poll.interval.ms = 300000
policy-api | [2024-01-22T13:55:28.751+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead
policy-pap | [2024-01-22T13:55:42.641+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
kafka | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-01-22T13:55:07.224133923Z level=info msg="Executing migration" id="Add index for epoch_end"
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
simulator | 2024-01-22 13:55:08,954 INFO Started o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}
mariadb | 2024-01-22 13:55:14 0 [Note] Server socket created on IP: '0.0.0.0'.
policy-apex-pdp | max.poll.records = 500
policy-api | [2024-01-22T13:55:30.670+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-pap | [2024-01-22T13:55:42.669+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default)
kafka | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-01-22T13:55:07.225280213Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.14565ms
policy-db-migrator | --------------
simulator | 2024-01-22 13:55:08,955 INFO Started SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
mariadb | 2024-01-22 13:55:14 0 [Note] Server socket created on IP: '::'.
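The HikariPool-1 messages above are HikariCP warming its connection pool against the MariaDB instance that just came up; the HHH90000025/26 warnings only say the explicitly configured MariaDB103Dialect is obsolete. A minimal sketch of the pool setup those messages correspond to (the JDBC URL, database name, and credentials are assumptions, not taken from the log; this is not the actual Spring configuration):

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.Connection;

    public final class PoolSketch {
        public static void main(String[] args) throws Exception {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin"); // hypothetical database name
            config.setUsername("policy_user"); // hypothetical
            config.setPassword("policy_user"); // hypothetical
            // Starting the pool produces log lines of the "HikariPool-1 - Starting..." /
            // "Added connection ..." / "Start completed." shape seen above.
            try (HikariDataSource dataSource = new HikariDataSource(config);
                 Connection connection = dataSource.getConnection()) {
                System.out.println("connected: " + connection.getMetaData().getDatabaseProductVersion());
            }
        }
    }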
policy-apex-pdp | metadata.max.age.ms = 300000
policy-api | [2024-01-22T13:55:30.674+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-pap | [2024-01-22T13:55:42.670+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead
kafka | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-01-22T13:55:07.22898971Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
simulator | 2024-01-22 13:55:08,955 INFO Started Server@30bcf3c1{STARTING}[11.0.18,sto=0] @2074ms
mariadb | 2024-01-22 13:55:14 0 [Note] mariadbd: ready for connections.
policy-apex-pdp | metric.reporters = []
policy-api | [2024-01-22T13:55:32.022+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-pap | [2024-01-22T13:55:44.682+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
kafka | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-22T13:55:07.229463873Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=477.513µs
policy-db-migrator | --------------
simulator | 2024-01-22 13:55:08,955 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4908 ms.
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
policy-apex-pdp | metrics.num.samples = 2
policy-api | [2024-01-22T13:55:32.938+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-pap | [2024-01-22T13:55:44.686+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
kafka | sasl.mechanism.controller.protocol = GSSAPI
grafana | logger=migrator t=2024-01-22T13:55:07.234417983Z level=info msg="Executing migration" id="Move region to single row"
policy-db-migrator |
simulator | 2024-01-22 13:55:08,956 INFO org.onap.policy.models.simulators starting VFC simulator
mariadb | 2024-01-22 13:55:14 0 [Note] InnoDB: Buffer pool(s) load completed at 240122 13:55:14
policy-apex-pdp | metrics.recording.level = INFO
policy-api | [2024-01-22T13:55:34.147+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-pap | [2024-01-22T13:55:45.254+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
grafana | logger=migrator t=2024-01-22T13:55:07.235306146Z level=info msg="Migration successfully executed" id="Move region to single row" duration=891.573µs
policy-db-migrator |
simulator | 2024-01-22 13:55:08,959 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
mariadb | 2024-01-22 13:55:15 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication)
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-api | [2024-01-22T13:55:34.358+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@58a01e47, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6149184e, org.springframework.security.web.context.SecurityContextHolderFilter@234a08ea, org.springframework.security.web.header.HeaderWriterFilter@2e26841f, org.springframework.security.web.authentication.logout.LogoutFilter@c7a7d3, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3413effc, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@56d3e4a9, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@2542d320, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6f3a8d5e, org.springframework.security.web.access.ExceptionTranslationFilter@19bd1f98, org.springframework.security.web.access.intercept.AuthorizationFilter@729f8c5d]
policy-pap | [2024-01-22T13:55:45.876+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
kafka | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=migrator t=2024-01-22T13:55:07.239047694Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
simulator | 2024-01-22 13:55:08,959 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
mariadb | 2024-01-22 13:55:15 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication)
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-api | [2024-01-22T13:55:35.184+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-pap | [2024-01-22T13:55:45.981+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
kafka | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-01-22T13:55:07.240646596Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.601132ms
simulator | 2024-01-22 13:55:08,960 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
mariadb | 2024-01-22 13:55:15 21 [Warning] Aborted connection 21 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
policy-apex-pdp | receive.buffer.bytes = 65536
policy-api | [2024-01-22T13:55:35.237+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-pap | [2024-01-22T13:55:46.268+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-01-22T13:55:07.244339503Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
simulator | 2024-01-22 13:55:08,961 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
mariadb | 2024-01-22 13:55:15 25 [Warning] Aborted connection 25 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-api | [2024-01-22T13:55:35.274+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
policy-pap | allow.auto.create.topics = true
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-01-22T13:55:07.245477143Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.13852ms
policy-db-migrator | --------------
simulator | 2024-01-22 13:55:08,974 INFO Session workerName=node0
policy-apex-pdp | reconnect.backoff.ms = 50
policy-api | [2024-01-22T13:55:35.292+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.617 seconds (process running for 12.242)
policy-pap | auto.commit.interval.ms = 5000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-01-22T13:55:07.249483188Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
simulator | 2024-01-22 13:55:09,075 INFO Using GSON for REST calls
policy-apex-pdp | request.timeout.ms = 30000
policy-api | [2024-01-22T13:55:39.938+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-pap | auto.include.jmx.reporter = true
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-22T13:55:07.250576047Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.092429ms
policy-db-migrator | --------------
simulator | 2024-01-22 13:55:09,086 INFO Started o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}
policy-apex-pdp | retry.backoff.ms = 100
policy-api | [2024-01-22T13:55:39.938+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-pap | auto.offset.reset = latest
kafka | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-01-22T13:55:07.253533404Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
policy-db-migrator |
simulator | 2024-01-22 13:55:09,091 INFO Started VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-api | [2024-01-22T13:55:39.940+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
policy-pap | bootstrap.servers = [kafka:9092]
kafka | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-01-22T13:55:07.254481369Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=947.385µs
policy-db-migrator |
simulator | 2024-01-22 13:55:09,092 INFO Started Server@a776e{STARTING}[11.0.18,sto=0] @2211ms
policy-apex-pdp | sasl.jaas.config = null
policy-api | [2024-01-22T13:55:53.303+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-4] ***** OrderedServiceImpl implementers:
policy-pap | check.crcs = true
kafka | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-01-22T13:55:07.257724124Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
simulator | 2024-01-22 13:55:09,093 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4867 ms.
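At this point the A&AI (6666), SDNC (6668), SO (6669), and VFC (6670) simulators have all bound their Jetty connectors. A quick way to confirm they answer is to hit each port over HTTP; a small sketch (port list taken from the log; the request path and the status codes returned are assumptions):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public final class SimulatorProbe {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            int[] ports = {6666, 6668, 6669, 6670}; // A&AI, SDNC, SO, VFC per the log
            for (int port : ports) {
                HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://localhost:" + port + "/")).GET().build();
                HttpResponse<Void> response =
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                System.out.println(port + " -> HTTP " + response.statusCode());
            }
        }
    }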
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-api | [] policy-pap | client.dns.lookup = use_all_dns_ips kafka | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-01-22T13:55:07.258636518Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=912.154µs policy-db-migrator | -------------- simulator | 2024-01-22 13:55:09,096 INFO org.onap.policy.models.simulators started policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | client.id = consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-1 kafka | sasl.server.callback.handler.class = null grafana | logger=migrator t=2024-01-22T13:55:07.262373276Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-apex-pdp | sasl.kerberos.service.name = null policy-pap | client.rack = kafka | sasl.server.max.receive.size = 524288 grafana | logger=migrator t=2024-01-22T13:55:07.263330201Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=953.615µs policy-db-migrator | -------------- policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | connections.max.idle.ms = 540000 kafka | security.inter.broker.protocol = PLAINTEXT grafana | logger=migrator t=2024-01-22T13:55:07.2667247Z level=info msg="Executing migration" id="Increase tags column to length 4096" policy-db-migrator | policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null kafka | security.providers = null policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:07.266873564Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=148.184µs policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null kafka | server.max.startup.time.ms = 9223372036854775807 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql grafana | logger=migrator t=2024-01-22T13:55:07.270248293Z level=info msg="Executing migration" id="create test_data table" policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 kafka | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:07.271210898Z level=info msg="Migration successfully executed" id="create test_data table" duration=962.176µs policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 kafka | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) grafana | logger=migrator t=2024-01-22T13:55:07.275257004Z level=info msg="Executing migration" id="create dashboard_version table v1" policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | 
sasl.login.retry.backoff.max.ms = 10000 kafka | socket.listen.backlog.size = 50 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:07.276059655Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=802.511µs policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI kafka | socket.receive.buffer.bytes = 102400 policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:07.28845451Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null kafka | socket.request.max.bytes = 104857600 policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:07.289533708Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.080298ms policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | socket.send.buffer.bytes = 102400 grafana | logger=migrator t=2024-01-22T13:55:07.292350632Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | ssl.cipher.suites = [] policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql grafana | logger=migrator t=2024-01-22T13:55:07.293352088Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.001516ms policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope kafka | ssl.client.auth = none policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:07.297769474Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) grafana | logger=migrator t=2024-01-22T13:55:07.298334199Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=565.685µs policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null kafka | ssl.endpoint.identification.algorithm = https policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:07.301629215Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 kafka | ssl.engine.factory.class = null policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:07.302399746Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=773.041µs policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 kafka | ssl.key.password = null policy-db-migrator | grafana | logger=migrator 
t=2024-01-22T13:55:07.305895277Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | ssl.keymanager.algorithm = SunX509 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql grafana | logger=migrator t=2024-01-22T13:55:07.306171455Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=275.977µs policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null kafka | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:07.309566214Z level=info msg="Executing migration" id="create team table" policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.key = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-22T13:55:07.310441087Z level=info msg="Migration successfully executed" id="create team table" duration=874.992µs policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null kafka | ssl.keystore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:07.314199325Z level=info msg="Executing migration" id="add index team.org_id" policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null kafka | ssl.keystore.password = null policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:07.315276293Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.080838ms policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 kafka | ssl.keystore.type = JKS grafana | logger=migrator t=2024-01-22T13:55:07.318006825Z level=info msg="Executing migration" id="add unique index team_org_id_name" policy-apex-pdp | ssl.provider = null policy-pap | default.api.timeout.ms = 60000 kafka | ssl.principal.mapping.rules = DEFAULT grafana | logger=migrator t=2024-01-22T13:55:07.319036862Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.029407ms policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true kafka | ssl.protocol = TLSv1.3 policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:07.322985076Z level=info msg="Executing migration" id="Add column uid in team" policy-pap | fetch.max.bytes = 52428800 kafka | ssl.provider = null policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql grafana | logger=migrator t=2024-01-22T13:55:07.327850603Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.865138ms policy-apex-pdp | ssl.secure.random.implementation = null policy-pap | fetch.max.wait.ms = 500 kafka | ssl.secure.random.implementation = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:07.331186871Z level=info msg="Executing migration" id="Update uid column values in team" policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-pap | fetch.min.bytes = 1 kafka | ssl.trustmanager.algorithm = PKIX policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences 
(name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) grafana | logger=migrator t=2024-01-22T13:55:07.331461218Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=273.557µs policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null kafka | ssl.truststore.certificates = null grafana | logger=migrator t=2024-01-22T13:55:07.334706983Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" policy-apex-pdp | ssl.truststore.password = null policy-pap | group.id = 79c954dd-4645-472b-b928-ee2d4186f7c1 kafka | ssl.truststore.location = null grafana | logger=migrator t=2024-01-22T13:55:07.33574139Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.034087ms policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-db-migrator | -------------- kafka | ssl.truststore.password = null grafana | logger=migrator t=2024-01-22T13:55:07.449474473Z level=info msg="Executing migration" id="create team member table" policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-db-migrator | kafka | ssl.truststore.type = JKS grafana | logger=migrator t=2024-01-22T13:55:07.450348876Z level=info msg="Migration successfully executed" id="create team member table" duration=878.113µs policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-db-migrator | kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 grafana | logger=migrator t=2024-01-22T13:55:07.45469954Z level=info msg="Executing migration" id="add index team_member.org_id" policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql kafka | transaction.max.timeout.ms = 900000 grafana | logger=migrator t=2024-01-22T13:55:07.456360143Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.661293ms policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-db-migrator | -------------- kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 grafana | logger=migrator t=2024-01-22T13:55:07.459541787Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) kafka | transaction.state.log.load.buffer.size = 5242880 grafana | logger=migrator t=2024-01-22T13:55:07.460544813Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.002896ms policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO kafka | transaction.state.log.min.isr = 2 grafana | logger=migrator t=2024-01-22T13:55:07.463612204Z level=info msg="Executing migration" id="add index team_member.team_id" policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-db-migrator | 
-------------- kafka | transaction.state.log.num.partitions = 50 grafana | logger=migrator t=2024-01-22T13:55:07.464589759Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=977.696µs policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-db-migrator | kafka | transaction.state.log.replication.factor = 3 grafana | logger=migrator t=2024-01-22T13:55:07.468360408Z level=info msg="Executing migration" id="Add column email to team table" policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-db-migrator | kafka | transaction.state.log.segment.bytes = 104857600 grafana | logger=migrator t=2024-01-22T13:55:07.473163264Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.802856ms policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql kafka | transactional.id.expiration.ms = 604800000 grafana | logger=migrator t=2024-01-22T13:55:07.476536152Z level=info msg="Executing migration" id="Add column external to team_member table" policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | -------------- kafka | unclean.leader.election.enable = false grafana | logger=migrator t=2024-01-22T13:55:07.481259196Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.722364ms policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) kafka | unstable.api.versions.enable = false grafana | logger=migrator t=2024-01-22T13:55:07.484023739Z level=info msg="Executing migration" id="Add column permission to team_member table" policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | -------------- kafka | zookeeper.clientCnxnSocket = null grafana | logger=migrator t=2024-01-22T13:55:07.488578238Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.553679ms policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-db-migrator | kafka | zookeeper.connect = zookeeper:2181 grafana | logger=migrator t=2024-01-22T13:55:07.492529092Z level=info msg="Executing migration" id="create dashboard acl table" policy-pap | sasl.login.connect.timeout.ms = null kafka | zookeeper.connection.timeout.ms = null grafana | logger=migrator t=2024-01-22T13:55:07.493431516Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=902.334µs policy-apex-pdp | ssl.truststore.type = JKS kafka | zookeeper.max.in.flight.requests = 10 grafana | logger=migrator t=2024-01-22T13:55:07.496341471Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | zookeeper.metadata.migration.enable = false grafana | logger=migrator t=2024-01-22T13:55:07.497321197Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" 
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
policy-apex-pdp | 
kafka | zookeeper.session.timeout.ms = 18000
grafana | logger=migrator t=2024-01-22T13:55:07.500198832Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-22T13:55:49.685+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
kafka | zookeeper.set.acl = false
grafana | logger=migrator t=2024-01-22T13:55:07.50127534Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.075998ms
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | [2024-01-22T13:55:49.685+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
kafka | zookeeper.ssl.cipher.suites = null
grafana | logger=migrator t=2024-01-22T13:55:07.505007688Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-22T13:55:49.685+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931749683
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | zookeeper.ssl.client.enable = false
grafana | logger=migrator t=2024-01-22T13:55:07.505999024Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=990.816µs
policy-db-migrator | 
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | zookeeper.ssl.crl.enable = false
grafana | logger=migrator t=2024-01-22T13:55:07.511038386Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
policy-db-migrator | 
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | zookeeper.ssl.enabled.protocols = null
grafana | logger=migrator t=2024-01-22T13:55:07.512018392Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=979.816µs
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
grafana | logger=migrator t=2024-01-22T13:55:07.51575356Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
kafka | zookeeper.ssl.keystore.location = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
grafana | logger=migrator t=2024-01-22T13:55:07.516723555Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=969.985µs
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | zookeeper.ssl.keystore.password = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:07.519693033Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-db-migrator | 
kafka | zookeeper.ssl.keystore.type = null
grafana | logger=migrator t=2024-01-22T13:55:07.520686269Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=992.026µs
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-db-migrator | 
kafka | zookeeper.ssl.ocsp.enable = false
grafana | logger=migrator t=2024-01-22T13:55:07.5245162Z level=info msg="Executing migration" id="add index dashboard_permission"
policy-pap | session.timeout.ms = 45000
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
kafka | zookeeper.ssl.protocol = TLSv1.2
grafana | logger=migrator t=2024-01-22T13:55:07.525705261Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.189441ms
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-db-migrator | --------------
kafka | zookeeper.ssl.truststore.location = null
grafana | logger=migrator t=2024-01-22T13:55:07.528630828Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | zookeeper.ssl.truststore.password = null
grafana | logger=migrator t=2024-01-22T13:55:07.529303235Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=671.907µs
policy-pap | ssl.engine.factory.class = null
policy-pap | ssl.key.password = null
policy-db-migrator | --------------
kafka | zookeeper.ssl.truststore.type = null
grafana | logger=migrator t=2024-01-22T13:55:07.533106515Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-db-migrator | 
kafka | (kafka.server.KafkaConfig)
grafana | logger=migrator t=2024-01-22T13:55:07.533402383Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=295.488µs
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-db-migrator | 
kafka | [2024-01-22 13:55:17,869] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
grafana | logger=migrator t=2024-01-22T13:55:07.536747971Z level=info msg="Executing migration" id="create tag table"
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
kafka | [2024-01-22 13:55:17,870] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
grafana | logger=migrator t=2024-01-22T13:55:07.537980943Z level=info msg="Migration successfully executed" id="create tag table" duration=1.229613ms
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:17,872] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
grafana | logger=migrator t=2024-01-22T13:55:07.54245833Z level=info msg="Executing migration" id="add index tag.key_value"
policy-pap | ssl.secure.random.implementation = null
policy-apex-pdp | [2024-01-22T13:55:49.688+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-1, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Subscribed to topic(s): policy-pdp-pap
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
kafka | [2024-01-22 13:55:17,877] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
grafana | logger=migrator t=2024-01-22T13:55:07.543892108Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.434618ms
policy-apex-pdp | [2024-01-22T13:55:49.703+00:00|INFO|ServiceManager|main] service manager starting
policy-apex-pdp | [2024-01-22T13:55:49.704+00:00|INFO|ServiceManager|main] service manager starting topics
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:17,912] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-22T13:55:07.547332128Z level=info msg="Executing migration" id="create login attempt table"
policy-apex-pdp | [2024-01-22T13:55:49.711+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=e65163a7-0954-4bf8-9924-8c41fa40f9af, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
policy-apex-pdp | [2024-01-22T13:55:49.736+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-db-migrator | 
kafka | [2024-01-22 13:55:17,919] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-22T13:55:07.548091628Z level=info msg="Migration successfully executed" id="create login attempt table" duration=759.26µs
policy-apex-pdp | allow.auto.create.topics = true
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-db-migrator | 
kafka | [2024-01-22 13:55:17,929] INFO Loaded 0 logs in 17ms (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-22T13:55:07.551878357Z level=info msg="Executing migration" id="add index login_attempt.username"
policy-apex-pdp | auto.include.jmx.reporter = true
policy-apex-pdp | auto.offset.reset = latest
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
kafka | [2024-01-22 13:55:17,931] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-22T13:55:07.552874273Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=998.696µs
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | check.crcs = true
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:17,934] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-22T13:55:07.557171266Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | [2024-01-22 13:55:17,955] INFO Starting the log cleaner (kafka.log.LogCleaner)
grafana | logger=migrator t=2024-01-22T13:55:07.558154612Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=983.176µs
policy-apex-pdp | client.rack = 
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:18,000] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
grafana | logger=migrator t=2024-01-22T13:55:07.562701001Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
policy-apex-pdp | connections.max.idle.ms = 540000
policy-pap | ssl.truststore.certificates = null
policy-db-migrator | 
kafka | [2024-01-22 13:55:18,016] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
grafana | logger=migrator t=2024-01-22T13:55:07.579975524Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=17.270263ms
policy-apex-pdp | default.api.timeout.ms = 60000
policy-pap | ssl.truststore.location = null
policy-db-migrator | 
kafka | [2024-01-22 13:55:18,029] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
grafana | logger=migrator t=2024-01-22T13:55:07.58363585Z level=info msg="Executing migration" id="create login_attempt v2"
policy-apex-pdp | enable.auto.commit = true
policy-pap | ssl.truststore.password = null
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
kafka | [2024-01-22 13:55:18,086] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
grafana | logger=migrator t=2024-01-22T13:55:07.584276947Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=641.257µs
policy-apex-pdp | exclude.internal.topics = true
policy-pap | ssl.truststore.type = JKS
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:18,423] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
grafana | logger=migrator t=2024-01-22T13:55:07.588168229Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
policy-apex-pdp | fetch.max.bytes = 52428800
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
kafka | [2024-01-22 13:55:18,446] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
grafana | logger=migrator t=2024-01-22T13:55:07.589303769Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.13171ms
policy-apex-pdp | fetch.max.wait.ms = 500
policy-pap | 
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:18,447] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
grafana | logger=migrator t=2024-01-22T13:55:07.593210011Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
policy-apex-pdp | fetch.min.bytes = 1
policy-pap | [2024-01-22T13:55:46.446+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-db-migrator | 
kafka | [2024-01-22 13:55:18,453] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
grafana | logger=migrator t=2024-01-22T13:55:07.593808157Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=597.596µs
policy-apex-pdp | group.id = e65163a7-0954-4bf8-9924-8c41fa40f9af
policy-pap | [2024-01-22T13:55:46.446+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-db-migrator | 
kafka | [2024-01-22 13:55:18,457] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
grafana | logger=migrator t=2024-01-22T13:55:07.597557215Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
policy-apex-pdp | group.instance.id = null
policy-pap | [2024-01-22T13:55:46.446+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931746444
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
kafka | [2024-01-22 13:55:18,476] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-01-22T13:55:07.598295525Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=737.64µs
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-pap | [2024-01-22T13:55:46.449+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-1, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Subscribed to topic(s): policy-pdp-pap
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:18,480] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-01-22T13:55:07.601993242Z level=info msg="Executing migration" id="create user auth table"
policy-apex-pdp | interceptor.classes = []
policy-pap | [2024-01-22T13:55:46.449+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | [2024-01-22 13:55:18,479] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-01-22T13:55:07.602802223Z level=info msg="Migration successfully executed" id="create user auth table" duration=807.751µs
policy-apex-pdp | internal.leave.group.on.close = true
policy-pap | allow.auto.create.topics = true
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:18,481] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-01-22T13:55:07.605994727Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | auto.commit.interval.ms = 5000
policy-db-migrator | 
kafka | [2024-01-22 13:55:18,496] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
grafana | logger=migrator t=2024-01-22T13:55:07.607052284Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.057257ms
policy-apex-pdp | isolation.level = read_uncommitted
policy-pap | auto.include.jmx.reporter = true
policy-db-migrator | 
kafka | [2024-01-22 13:55:18,524] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
grafana | logger=migrator t=2024-01-22T13:55:07.610176866Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
policy-pap | auto.offset.reset = latest
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-01-22 13:55:18,598] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1705931718568,1705931718568,1,0,0,72057614900985857,258,0,27 (kafka.zk.KafkaZkClient)
grafana | logger=migrator t=2024-01-22T13:55:07.61031808Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=140.354µs
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-pap | bootstrap.servers = [kafka:9092]
policy-apex-pdp | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-01-22T13:55:07.614822258Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
policy-db-migrator | --------------
policy-pap | check.crcs = true
kafka | [2024-01-22 13:55:18,599] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
grafana | logger=migrator t=2024-01-22T13:55:07.619998374Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.175406ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | max.poll.interval.ms = 300000
kafka | [2024-01-22 13:55:18,860] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
grafana | logger=migrator t=2024-01-22T13:55:07.623204028Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
policy-db-migrator | --------------
policy-pap | client.id = consumer-policy-pap-2
policy-apex-pdp | max.poll.records = 500
kafka | [2024-01-22 13:55:18,868] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-01-22T13:55:07.628292491Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.087583ms
policy-db-migrator | 
policy-pap | client.rack = 
policy-apex-pdp | metadata.max.age.ms = 300000
kafka | [2024-01-22 13:55:18,875] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-01-22T13:55:07.631823354Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
policy-db-migrator | 
policy-pap | connections.max.idle.ms = 540000
policy-apex-pdp | metric.reporters = []
kafka | [2024-01-22 13:55:18,875] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-01-22T13:55:07.636919418Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.096124ms
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-pap | default.api.timeout.ms = 60000
policy-apex-pdp | metrics.num.samples = 2
kafka | [2024-01-22 13:55:18,898] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
grafana | logger=migrator t=2024-01-22T13:55:07.641342314Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
policy-db-migrator | --------------
policy-pap | enable.auto.commit = true
policy-apex-pdp | metrics.recording.level = INFO
kafka | [2024-01-22 13:55:18,906] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-22T13:55:07.646410576Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.067712ms
policy-pap | exclude.internal.topics = true
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | [2024-01-22 13:55:18,911] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-22T13:55:07.650243757Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
policy-pap | fetch.max.bytes = 52428800
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:18,913] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-22T13:55:07.651177832Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=934.114µs
policy-pap | fetch.max.wait.ms = 500
policy-apex-pdp | receive.buffer.bytes = 65536
policy-db-migrator | 
kafka | [2024-01-22 13:55:18,915] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-22T13:55:07.654998102Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
policy-pap | fetch.min.bytes = 1
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-db-migrator | 
kafka | [2024-01-22 13:55:18,932] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
grafana | logger=migrator t=2024-01-22T13:55:07.66408046Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=9.078188ms
policy-apex-pdp | reconnect.backoff.ms = 50
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
kafka | [2024-01-22 13:55:19,015] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
grafana | logger=migrator t=2024-01-22T13:55:07.669510602Z level=info msg="Executing migration" id="create server_lock table"
policy-pap | group.id = policy-pap
policy-apex-pdp | request.timeout.ms = 30000
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:19,017] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
grafana | logger=migrator t=2024-01-22T13:55:07.670388345Z level=info msg="Migration successfully executed" id="create server_lock table" duration=877.273µs
policy-pap | group.instance.id = null
policy-apex-pdp | retry.backoff.ms = 100
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
kafka | [2024-01-22 13:55:19,018] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2024-01-22 13:55:19,055] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
policy-pap | heartbeat.interval.ms = 3000
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:19,057] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-01-22T13:55:07.673825855Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
policy-apex-pdp | sasl.jaas.config = null
policy-db-migrator | 
policy-pap | interceptor.classes = []
kafka | [2024-01-22 13:55:19,062] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-22T13:55:07.674811231Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=985.996µs
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator | 
policy-pap | internal.leave.group.on.close = true
kafka | [2024-01-22 13:55:19,073] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-22T13:55:07.678365195Z level=info msg="Executing migration" id="create user auth token table"
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
kafka | [2024-01-22 13:55:19,077] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-01-22T13:55:07.679226377Z level=info msg="Migration successfully executed" id="create user auth token table" duration=860.453µs
policy-db-migrator | --------------
policy-pap | isolation.level = read_uncommitted
kafka | [2024-01-22 13:55:19,081] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-01-22T13:55:07.682795921Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-01-22 13:55:19,091] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
policy-apex-pdp | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-01-22T13:55:07.68391961Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.124029ms
policy-db-migrator | --------------
policy-pap | max.partition.fetch.bytes = 1048576
kafka | [2024-01-22 13:55:19,100] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
policy-apex-pdp | sasl.login.class = null
grafana | logger=migrator t=2024-01-22T13:55:07.688469299Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
policy-db-migrator | 
policy-pap | max.poll.interval.ms = 300000
kafka | [2024-01-22 13:55:19,104] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
policy-apex-pdp | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-01-22T13:55:07.689512637Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.043428ms
policy-db-migrator | 
policy-pap | max.poll.records = 500
kafka | [2024-01-22 13:55:19,109] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
policy-apex-pdp | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-01-22T13:55:07.693284766Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-01-22 13:55:19,119] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-01-22T13:55:07.694334043Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.049797ms
policy-db-migrator | --------------
policy-pap | metric.reporters = []
kafka | [2024-01-22 13:55:19,123] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-01-22T13:55:07.699444487Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | metrics.num.samples = 2
kafka | [2024-01-22 13:55:19,123] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-01-22T13:55:07.705188438Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.743481ms
policy-db-migrator | --------------
policy-pap | metrics.recording.level = INFO
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-01-22T13:55:07.708109075Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
kafka | [2024-01-22 13:55:19,125] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
policy-db-migrator | 
policy-pap | metrics.sample.window.ms = 30000
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-01-22T13:55:07.709202843Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.094589ms
kafka | [2024-01-22 13:55:19,125] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
policy-db-migrator | 
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-22T13:55:07.712045938Z level=info msg="Executing migration" id="create cache_data table"
kafka | [2024-01-22 13:55:19,125] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-pap | receive.buffer.bytes = 65536
policy-apex-pdp | sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-01-22T13:55:07.71290678Z level=info msg="Migration successfully executed" id="create cache_data table" duration=860.292µs
kafka | [2024-01-22 13:55:19,125] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | reconnect.backoff.max.ms = 1000
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=migrator t=2024-01-22T13:55:07.716957247Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
kafka | [2024-01-22 13:55:19,126] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | reconnect.backoff.ms = 50
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-01-22T13:55:07.717927412Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=970.015µs
kafka | [2024-01-22 13:55:19,132] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | request.timeout.ms = 30000
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-01-22T13:55:07.721361772Z level=info msg="Executing migration" id="create short_url table v1"
kafka | [2024-01-22 13:55:19,132] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
policy-db-migrator | 
policy-pap | retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-01-22T13:55:07.722234545Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=870.123µs
kafka | [2024-01-22 13:55:19,132] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
policy-db-migrator | 
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-01-22T13:55:07.726138387Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
kafka | [2024-01-22 13:55:19,133] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-pap | sasl.jaas.config = null
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-01-22T13:55:07.727141624Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.002277ms
kafka | [2024-01-22 13:55:19,134] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-22T13:55:07.731302173Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
kafka | [2024-01-22 13:55:19,136] INFO Kafka version: 7.5.3-ccs (org.apache.kafka.common.utils.AppInfoParser)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-01-22T13:55:07.731442136Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=139.263µs
kafka | [2024-01-22 13:55:19,136] INFO Kafka commitId: 9090b26369455a2f335fbb5487fb89675ee406ab (org.apache.kafka.common.utils.AppInfoParser)
policy-db-migrator | --------------
policy-pap | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-01-22T13:55:07.734863836Z level=info msg="Executing migration" id="delete alert_definition table"
kafka | [2024-01-22 13:55:19,136] INFO Kafka startTimeMs: 1705931719131 (org.apache.kafka.common.utils.AppInfoParser)
policy-db-migrator | 
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-01-22T13:55:07.735109853Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=245.717µs
kafka | [2024-01-22 13:55:19,137] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
policy-db-migrator | 
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-01-22T13:55:07.739032116Z level=info msg="Executing migration" id="recreate alert_definition table"
kafka | [2024-01-22 13:55:19,137] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-pap | sasl.login.callback.handler.class = null
policy-apex-pdp | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-01-22T13:55:07.740442683Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.410147ms
kafka | [2024-01-22 13:55:19,144] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
policy-db-migrator | --------------
policy-pap | sasl.login.class = null
policy-apex-pdp | security.providers = null
grafana | logger=migrator t=2024-01-22T13:55:07.744195881Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
kafka | [2024-01-22 13:55:19,145] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
policy-pap | sasl.login.connect.timeout.ms = null
policy-apex-pdp | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-01-22T13:55:07.745257679Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.062018ms
kafka | [2024-01-22 13:55:19,148] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-db-migrator | --------------
policy-pap | sasl.login.read.timeout.ms = null
policy-apex-pdp | session.timeout.ms = 45000
grafana | logger=migrator t=2024-01-22T13:55:07.750337672Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
kafka | [2024-01-22 13:55:19,154] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
policy-db-migrator | 
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-01-22T13:55:07.751328678Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=989.186µs
kafka | [2024-01-22 13:55:19,178] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
policy-db-migrator | 
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-01-22T13:55:07.754971254Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
kafka | [2024-01-22 13:55:19,179] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | ssl.cipher.suites = null
grafana | logger=migrator t=2024-01-22T13:55:07.755112337Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=142.324µs
kafka | [2024-01-22 13:55:19,185] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-01-22T13:55:07.758017773Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
kafka | [2024-01-22 13:55:19,185] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-01-22T13:55:07.758970678Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=952.695µs
kafka | [2024-01-22 13:55:19,185] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-01-22T13:55:07.762956553Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
kafka | [2024-01-22 13:55:19,192] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
policy-db-migrator | 
policy-pap | sasl.mechanism = GSSAPI
policy-apex-pdp | ssl.key.password = null
grafana | logger=migrator t=2024-01-22T13:55:07.764016511Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.057228ms
kafka | [2024-01-22 13:55:19,192] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
policy-db-migrator | 
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-01-22T13:55:07.768532449Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
kafka | [2024-01-22 13:55:19,192] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
policy-pap | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-01-22T13:55:07.769517265Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=983.126µs
kafka | [2024-01-22 13:55:19,192] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | ssl.keystore.key = null
grafana | logger=migrator t=2024-01-22T13:55:07.773977922Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
kafka | [2024-01-22 13:55:19,193] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | ssl.keystore.location = null
grafana | logger=migrator t=2024-01-22T13:55:07.774966908Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=988.336µs
kafka | [2024-01-22 13:55:19,267] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | ssl.keystore.password = null
grafana | logger=migrator t=2024-01-22T13:55:07.777813003Z level=info msg="Executing migration" id="Add column paused in alert_definition"
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-01-22T13:55:07.783491761Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.678378ms
kafka | [2024-01-22 13:55:19,272] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
policy-db-migrator | 
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-01-22T13:55:07.824658751Z level=info msg="Executing migration" id="drop alert_definition table"
kafka | [2024-01-22 13:55:19,307] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | ssl.provider = null
grafana | logger=migrator t=2024-01-22T13:55:07.82612591Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.466338ms
kafka | [2024-01-22 13:55:19,350] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
policy-db-migrator | 
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-01-22T13:55:07.831444899Z level=info msg="Executing migration" id="delete alert_definition_version table"
kafka | [2024-01-22 13:55:24,351] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-01-22T13:55:07.831708046Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=260.937µs
kafka | [2024-01-22 13:55:24,352] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | security.protocol = PLAINTEXT
policy-apex-pdp | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-01-22T13:55:07.834873639Z level=info msg="Executing migration" id="recreate alert_definition_version table"
kafka | [2024-01-22 13:55:48,946] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | security.providers = null
policy-apex-pdp | ssl.truststore.location = null
grafana | logger=migrator t=2024-01-22T13:55:07.835792893Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=918.984µs
kafka | [2024-01-22 13:55:48,952] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
policy-db-migrator | --------------
policy-pap | send.buffer.bytes = 131072
policy-apex-pdp | ssl.truststore.password = null
grafana | logger=migrator t=2024-01-22T13:55:07.839258444Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
kafka | [2024-01-22 13:55:48,954] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
policy-db-migrator | 
policy-pap | session.timeout.ms = 45000
policy-apex-pdp | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-01-22T13:55:07.840312252Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.053318ms
kafka | [2024-01-22 13:55:48,958] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
policy-db-migrator | 
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | [2024-01-22 13:55:48,994] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(RF6sJHOeSLeKzNa2An6Amw),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-22T13:55:07.847451369Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
policy-apex-pdp | 
policy-db-migrator | --------------
policy-pap | ssl.cipher.suites = null
kafka | [2024-01-22 13:55:48,994] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-22T13:55:07.848495976Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.044577ms
policy-apex-pdp | [2024-01-22T13:55:49.751+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-01-22 13:55:48,996] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:07.852096841Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
policy-apex-pdp | [2024-01-22T13:55:49.751+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-db-migrator | --------------
policy-pap | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-01-22T13:55:07.852363128Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=266.647µs
kafka | [2024-01-22 13:55:48,997] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-apex-pdp | [2024-01-22T13:55:49.752+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931749751
policy-db-migrator | 
policy-pap | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-01-22T13:55:07.855810828Z level=info msg="Executing migration" id="drop alert_definition_version table"
kafka | [2024-01-22 13:55:49,001] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | [2024-01-22T13:55:49.752+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Subscribed to topic(s): policy-pdp-pap
policy-db-migrator | 
policy-pap | ssl.key.password = null
grafana | logger=migrator t=2024-01-22T13:55:07.857303907Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.491959ms
kafka | [2024-01-22 13:55:49,001] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-apex-pdp | [2024-01-22T13:55:49.753+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=793310ba-b44a-41bd-a3a3-fc0762926d3d, alive=false, publisher=null]]: starting
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
policy-pap | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-01-22T13:55:07.864477365Z level=info msg="Executing migration" id="create alert_instance table"
kafka | [2024-01-22 13:55:49,045] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:55:49.830+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-db-migrator | --------------
policy-pap | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-01-22T13:55:07.86542526Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=945.765µs
kafka | [2024-01-22 13:55:49,048] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
policy-apex-pdp | acks = -1
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | ssl.keystore.key = null
grafana | logger=migrator t=2024-01-22T13:55:07.8684487Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
kafka | [2024-01-22 13:55:49,049] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger)
policy-apex-pdp | auto.include.jmx.reporter = true
policy-db-migrator | --------------
policy-pap | ssl.keystore.location = null
grafana | logger=migrator t=2024-01-22T13:55:07.869447166Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=998.437µs
kafka | [2024-01-22 13:55:49,051] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger)
policy-apex-pdp | batch.size = 16384
policy-db-migrator | 
policy-pap | ssl.keystore.password = null
grafana | logger=migrator t=2024-01-22T13:55:07.875534375Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
kafka | [2024-01-22 13:55:49,052] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-db-migrator | 
policy-pap | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-01-22T13:55:07.876541252Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.006477ms
kafka | [2024-01-22 13:55:49,052] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-apex-pdp | buffer.memory = 33554432
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
policy-pap | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-01-22T13:55:07.880079035Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
kafka | [2024-01-22 13:55:49,063] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger)
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-db-migrator | --------------
policy-pap | ssl.provider = null
grafana | logger=migrator t=2024-01-22T13:55:07.887599032Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=7.519438ms
kafka | [2024-01-22 13:55:49,064] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-apex-pdp | client.id = producer-1
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-pap | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-01-22T13:55:07.890894128Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
kafka | [2024-01-22 13:55:49,065] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(Zh415qvLQvmHe6oa34REOg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
policy-apex-pdp | compression.type = none
policy-db-migrator | --------------
policy-pap | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-01-22T13:55:07.892029348Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.13412ms
kafka | [2024-01-22 13:55:49,065] INFO [Controller id=1] New partition creation callback for
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) policy-apex-pdp | connections.max.idle.ms = 540000 policy-db-migrator | policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-01-22T13:55:07.897016279Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" kafka | [2024-01-22 13:55:49,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | delivery.timeout.ms = 120000 policy-db-migrator | policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-01-22T13:55:07.897684876Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=668.707µs kafka | [2024-01-22 13:55:49,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | enable.idempotence = true policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-pap | ssl.truststore.password = null grafana | logger=migrator t=2024-01-22T13:55:07.900521491Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" kafka | [2024-01-22 13:55:49,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | interceptor.classes = [] policy-db-migrator | -------------- policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-01-22T13:55:07.935807306Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=35.283735ms kafka | [2024-01-22 13:55:49,066] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator 
t=2024-01-22T13:55:07.938951938Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" kafka | [2024-01-22 13:55:49,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | linger.ms = 0 policy-db-migrator | -------------- policy-pap | grafana | logger=migrator t=2024-01-22T13:55:07.976567435Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=37.614097ms kafka | [2024-01-22 13:55:49,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | max.block.ms = 60000 policy-db-migrator | policy-pap | [2024-01-22T13:55:46.459+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 grafana | logger=migrator t=2024-01-22T13:55:07.981097134Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" kafka | [2024-01-22 13:55:49,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-db-migrator | policy-pap | [2024-01-22T13:55:46.460+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a grafana | logger=migrator t=2024-01-22T13:55:07.981849944Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=751.689µs kafka | [2024-01-22 13:55:49,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | max.request.size = 1048576 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-pap | [2024-01-22T13:55:46.460+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931746459 grafana | logger=migrator t=2024-01-22T13:55:07.984818021Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" kafka | [2024-01-22 13:55:49,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | metadata.max.age.ms = 300000 policy-db-migrator | -------------- policy-pap | [2024-01-22T13:55:46.460+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap grafana | logger=migrator t=2024-01-22T13:55:07.985827368Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.008387ms kafka | [2024-01-22 13:55:49,067] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | metadata.max.idle.ms = 300000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-22T13:55:07.989892784Z level=info msg="Executing migration" id="add current_reason column related to current_state" kafka | [2024-01-22 13:55:49,067] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:46.816+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-apex-pdp | metric.reporters = [] policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:07.995580884Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.68757ms kafka | [2024-01-22 13:55:49,068] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:47.004+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-apex-pdp | metrics.num.samples = 2 policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:08.000351109Z level=info msg="Executing migration" id="create alert_rule table" kafka | [2024-01-22 13:55:49,068] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:47.261+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@1be4a7e3, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@632b96b8, org.springframework.security.web.context.SecurityContextHolderFilter@8091d80, org.springframework.security.web.header.HeaderWriterFilter@3909308c, org.springframework.security.web.authentication.logout.LogoutFilter@2ffcdc9b, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@41463c56, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@6958d5d0, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@7169d668, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@544e6b, org.springframework.security.web.access.ExceptionTranslationFilter@2e2cd42c, org.springframework.security.web.access.intercept.AuthorizationFilter@1adf387e] policy-apex-pdp | metrics.recording.level = INFO policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:08.001737495Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.385716ms kafka | [2024-01-22 13:55:49,068] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:48.104+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-apex-pdp | metrics.sample.window.ms = 30000 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | 
-------------- kafka | [2024-01-22 13:55:49,068] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:48.228+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) kafka | [2024-01-22 13:55:49,068] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:48.261+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,068] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:48.281+00:00|INFO|ServiceManager|main] Policy PAP starting policy-apex-pdp | partitioner.class = null policy-db-migrator | kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:48.281+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-apex-pdp | partitioner.ignore.keys = false policy-db-migrator | kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:48.283+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-apex-pdp | receive.buffer.bytes = 32768 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:48.283+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:48.283+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-apex-pdp | reconnect.backoff.ms = 50 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:48.284+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-apex-pdp | request.timeout.ms = 30000 policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:48.284+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-apex-pdp | retries = 2147483647 policy-db-migrator | kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:48.290+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=79c954dd-4645-472b-b928-ee2d4186f7c1, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@6af29394 policy-apex-pdp | retry.backoff.ms = 100 policy-db-migrator | kafka | [2024-01-22 13:55:49,069] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:48.302+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=79c954dd-4645-472b-b928-ee2d4186f7c1, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-apex-pdp | sasl.client.callback.handler.class = null policy-db-migrator | > upgrade 0450-pdpgroup.sql kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-01-22T13:55:48.303+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | sasl.jaas.config = null policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | allow.auto.create.topics = true policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) kafka | [2024-01-22 
13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | auto.commit.interval.ms = 5000 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | auto.include.jmx.reporter = true policy-apex-pdp | sasl.kerberos.service.name = null policy-db-migrator | kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | auto.offset.reset = latest policy-db-migrator | kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | check.crcs = true policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | client.dns.lookup = use_all_dns_ips policy-apex-pdp | sasl.login.callback.handler.class = null kafka | [2024-01-22 13:55:49,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | client.id = consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3 policy-db-migrator | -------------- policy-apex-pdp | sasl.login.class = null kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | client.rack = policy-db-migrator | policy-apex-pdp | sasl.login.connect.timeout.ms = null kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | policy-apex-pdp | 
sasl.login.read.timeout.ms = null policy-pap | default.api.timeout.ms = 60000 kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0470-pdp.sql policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-pap | enable.auto.commit = true kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-pap | exclude.internal.topics = true kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-pap | fetch.max.bytes = 52428800 kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-pap | fetch.max.wait.ms = 500 kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-pap | fetch.min.bytes = 1 kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-pap | group.id = 79c954dd-4645-472b-b928-ee2d4186f7c1 policy-pap | group.instance.id = null grafana | logger=migrator t=2024-01-22T13:55:08.006998824Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2024-01-22T13:55:08.008181829Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.180896ms policy-apex-pdp | sasl.mechanism = GSSAPI policy-pap | heartbeat.interval.ms = 3000 kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-01-22T13:55:08.011347508Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" policy-pap | interceptor.classes = [] policy-db-migrator | > upgrade 0480-pdpstatistics.sql kafka | [2024-01-22 13:55:49,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) policy-apex-pdp | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-01-22T13:55:08.012517531Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.169863ms policy-pap | internal.leave.group.on.close = true policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-apex-pdp | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-01-22T13:55:08.021329499Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) kafka | [2024-01-22 13:55:49,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:08.022389189Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.05678ms policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | isolation.level = read_uncommitted grafana | logger=migrator t=2024-01-22T13:55:08.025688832Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | kafka | [2024-01-22 13:55:49,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-01-22T13:55:08.025776424Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=88.392µs policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | kafka | [2024-01-22 13:55:49,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | max.partition.fetch.bytes = 1048576 grafana | logger=migrator t=2024-01-22T13:55:08.028695006Z level=info msg="Executing migration" id="add column for to alert_rule" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql kafka | [2024-01-22 13:55:49,072] INFO [Controller id=1 epoch=1] Sending 
UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-pap | max.poll.interval.ms = 300000 grafana | logger=migrator t=2024-01-22T13:55:08.037079612Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.385566ms policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,074] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | max.poll.records = 500 grafana | logger=migrator t=2024-01-22T13:55:08.041518647Z level=info msg="Executing migration" id="add column annotations to alert_rule" policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-01-22 13:55:49,074] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-01-22T13:55:08.047329351Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.810263ms policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | metric.reporters = [] policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:08.050841689Z level=info msg="Executing migration" id="add column labels to alert_rule" kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | security.protocol = PLAINTEXT policy-pap | metrics.num.samples = 2 policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:08.056797737Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.953218ms kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | security.providers = null policy-pap | metrics.recording.level = INFO policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:08.060062089Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | send.buffer.bytes = 131072 policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql grafana | logger=migrator t=2024-01-22T13:55:08.061081557Z level=info msg="Migration successfully 
executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.019498ms kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:08.064834333Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-pap | receive.buffer.bytes = 65536 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:08.065937564Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.099551ms policy-apex-pdp | ssl.cipher.suites = null policy-pap | reconnect.backoff.max.ms = 1000 policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:08.069525295Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | reconnect.backoff.ms = 50 policy-db-migrator | kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:08.079486425Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.96048ms policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-pap | request.timeout.ms = 30000 policy-db-migrator | kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:08.082798118Z level=info msg="Executing migration" id="add panel_id column to alert_rule" policy-apex-pdp | ssl.engine.factory.class = null policy-pap | retry.backoff.ms = 100 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator 
t=2024-01-22T13:55:08.088664313Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.866595ms policy-apex-pdp | ssl.key.password = null policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:08.092356217Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-pap | sasl.jaas.config = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:08.09314779Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=790.992µs policy-apex-pdp | ssl.keystore.certificate.chain = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:08.095631159Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" policy-apex-pdp | ssl.keystore.key = null policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:08.099846188Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.214739ms policy-apex-pdp | ssl.keystore.location = null policy-pap | sasl.kerberos.service.name = null policy-db-migrator | kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:08.102880393Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" policy-apex-pdp | ssl.keystore.password = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:08.109613173Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.73212ms policy-apex-pdp | ssl.keystore.type = JKS policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | -------------- kafka 
| [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:08.116364483Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" policy-apex-pdp | ssl.protocol = TLSv1.3 policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:08.116515127Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=150.424µs policy-apex-pdp | ssl.provider = null policy-pap | sasl.login.class = null policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,076] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | ssl.secure.random.implementation = null policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:08.124120751Z level=info msg="Executing migration" id="create alert_rule_version table" kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:08.125833069Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.711968ms kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | ssl.truststore.certificates = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql grafana | logger=migrator t=2024-01-22T13:55:08.129306197Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | ssl.truststore.location = null policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:08.130455849Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.148922ms kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | ssl.truststore.password = null policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName 
VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-01-22T13:55:08.133587657Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | ssl.truststore.type = JKS policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:08.13476937Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.181283ms kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) policy-apex-pdp | transaction.timeout.ms = 60000 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:08.13900684Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" policy-apex-pdp | transactional.id = null kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:08.139165794Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=159.294µs policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql grafana | logger=migrator t=2024-01-22T13:55:08.141768127Z level=info msg="Executing migration" id="add column for to alert_rule_version" policy-apex-pdp | kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:08.148343682Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.575325ms policy-apex-pdp | [2024-01-22T13:55:49.854+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
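
The ProducerConfig dump that policy-apex-pdp prints above (acks = -1, enable.idempotence = true, retries = 2147483647, bootstrap.servers = [kafka:9092], StringSerializer for keys and values) is a standard kafka-clients 3.6.0 producer configuration, and "Instantiated an idempotent producer" is the message that client version logs when idempotence is left enabled. Below is a minimal Java sketch of an equivalent stand-alone producer; the class name and the sample payload are illustrative only, and this is a sketch of the raw client, not the InlineKafkaTopicSink wrapper the policy framework actually uses:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpStatusProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirrored from the ProducerConfig dump in this log.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");                 // logged as acks = -1
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);    // yields "Instantiated an idempotent producer"
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);  // logged as retries = 2147483647
        props.put(ProducerConfig.LINGER_MS_CONFIG, 0);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical payload; the real PDP_STATUS JSON is assembled by the policy framework.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
        }
    }
}

The ConsumerConfig dump interleaved from policy-pap (group.id = 79c954dd-4645-472b-b928-ee2d4186f7c1, auto.offset.reset = latest, StringDeserializer for keys and values) is the mirror image on the subscriber side of the same policy-pdp-pap topic.
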
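Each "> upgrade NNNN-*.sql" step that policy-db-migrator logs executes one DDL statement, and every table is created with CREATE TABLE IF NOT EXISTS, so rerunning the migrator against an already-initialized schema is a no-op rather than an error. The following is a rough sketch of what one such step amounts to over plain JDBC; the MariaDB host, database name, and credentials are assumptions for illustration (none appear in this excerpt), and a MariaDB JDBC driver is assumed on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MigrationStepSketch {
    public static void main(String[] args) throws Exception {
        // Host, database name, and credentials below are illustrative assumptions.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_user");
             Statement stmt = conn.createStatement()) {
            // DDL copied verbatim from the 0450-pdpgroup.sql step in this log;
            // IF NOT EXISTS is what makes a second run harmless.
            stmt.execute("CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, "
                    + "PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, "
                    + "version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))");
        }
    }
}
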
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.expected.audience = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
grafana | logger=migrator t=2024-01-22T13:55:08.15147063Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
policy-apex-pdp | [2024-01-22T13:55:49.881+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:08.158322663Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.851653ms
policy-apex-pdp | [2024-01-22T13:55:49.881+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:08.161221205Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
policy-apex-pdp | [2024-01-22T13:55:49.881+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931749881
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:08.165706231Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.482496ms
policy-apex-pdp | [2024-01-22T13:55:49.882+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=793310ba-b44a-41bd-a3a3-fc0762926d3d, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
grafana | logger=migrator t=2024-01-22T13:55:08.168310314Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
policy-apex-pdp | [2024-01-22T13:55:49.882+00:00|INFO|ServiceManager|main] service manager starting set alive
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:08.174432436Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.122112ms
policy-apex-pdp | [2024-01-22T13:55:49.882+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
grafana | logger=migrator t=2024-01-22T13:55:08.246637278Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
policy-apex-pdp | [2024-01-22T13:55:49.886+00:00|INFO|ServiceManager|main] service manager starting topic sinks
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:08.258676086Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=12.034549ms
policy-apex-pdp | [2024-01-22T13:55:49.886+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:08.265423036Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
policy-apex-pdp | [2024-01-22T13:55:49.892+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | security.protocol = PLAINTEXT
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:08.265500148Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=78.182µs
policy-apex-pdp | [2024-01-22T13:55:49.892+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | security.providers = null
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
grafana | logger=migrator t=2024-01-22T13:55:08.270980042Z level=info msg="Executing migration" id=create_alert_configuration_table
policy-apex-pdp | [2024-01-22T13:55:49.892+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-01-22T13:55:08.272087454Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.105331ms
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | session.timeout.ms = 45000
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-22T13:55:49.892+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=e65163a7-0954-4bf8-9924-8c41fa40f9af, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4ee37ca3
grafana | logger=migrator t=2024-01-22T13:55:08.276115637Z level=info msg="Executing migration" id="Add column default in alert_configuration"
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-apex-pdp | [2024-01-22T13:55:49.893+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=e65163a7-0954-4bf8-9924-8c41fa40f9af, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
grafana | logger=migrator t=2024-01-22T13:55:08.285565943Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=9.446935ms
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | [2024-01-22T13:55:49.893+00:00|INFO|ServiceManager|main] service manager starting Create REST server
grafana | logger=migrator t=2024-01-22T13:55:08.291671815Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-22T13:55:49.914+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
grafana | logger=migrator t=2024-01-22T13:55:08.291906661Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=237.167µs
policy-pap | ssl.cipher.suites = null
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-apex-pdp | []
grafana | logger=migrator t=2024-01-22T13:55:08.298795685Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-apex-pdp | [2024-01-22T13:55:49.926+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-01-22T13:55:08.303354963Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.563068ms
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0570-toscadatatype.sql
grafana | logger=migrator t=2024-01-22T13:55:08.306004848Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
kafka | [2024-01-22 13:55:49,077] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7bc51e59-0fe3-438a-aab9-b5da4616d765","timestampMs":1705931749893,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"}
policy-pap | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-01-22T13:55:08.306709688Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=706.72µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
policy-apex-pdp | [2024-01-22T13:55:50.140+00:00|INFO|ServiceManager|main] service manager starting Rest Server
policy-pap | ssl.key.password = null
kafka | [2024-01-22 13:55:49,077] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-22T13:55:50.141+00:00|INFO|ServiceManager|main] service manager starting
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-01-22 13:55:49,087] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
policy-apex-pdp | [2024-01-22T13:55:50.141+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
policy-pap | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-01-22T13:55:08.309109785Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
policy-db-migrator |
kafka | [2024-01-22 13:55:49,088] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager)
policy-pap | ssl.keystore.key = null
grafana | logger=migrator t=2024-01-22T13:55:08.317985115Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=8.87586ms
policy-db-migrator |
policy-apex-pdp | [2024-01-22T13:55:50.141+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-pap | ssl.keystore.location = null
grafana | logger=migrator t=2024-01-22T13:55:08.321501974Z level=info msg="Executing migration" id=create_ngalert_configuration_table
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
policy-apex-pdp | [2024-01-22T13:55:50.159+00:00|INFO|ServiceManager|main] service manager started
kafka | [2024-01-22 13:55:49,097] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
policy-pap | ssl.keystore.password = null
grafana | logger=migrator t=2024-01-22T13:55:08.322238954Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=736.78µs
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-22T13:55:50.160+00:00|INFO|ServiceManager|main] service manager started
kafka | [2024-01-22 13:55:49,207] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-01-22T13:55:08.325165507Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
policy-apex-pdp | [2024-01-22T13:55:50.160+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
kafka | [2024-01-22 13:55:49,220] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
policy-pap | ssl.protocol = TLSv1.3
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:08.326144964Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=979.317µs
kafka | [2024-01-22 13:55:49,232] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
policy-apex-pdp | [2024-01-22T13:55:50.162+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-pap | ssl.provider = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:08.329040586Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
kafka | [2024-01-22 13:55:49,235] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
policy-apex-pdp | [2024-01-22T13:55:50.341+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Cluster ID: YXDHh3LaSIyP8FezJr0IvQ
policy-pap | ssl.secure.random.implementation = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:08.336827235Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.785649ms
kafka | [2024-01-22 13:55:49,237] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(RF6sJHOeSLeKzNa2An6Amw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-apex-pdp | [2024-01-22T13:55:50.341+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: YXDHh3LaSIyP8FezJr0IvQ
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
grafana | logger=migrator t=2024-01-22T13:55:08.340318613Z level=info msg="Executing migration" id="create provenance_type table"
kafka | [2024-01-22 13:55:49,262] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
policy-apex-pdp | [2024-01-22T13:55:50.343+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
policy-pap | ssl.truststore.certificates = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:08.341071844Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=752.421µs
kafka | [2024-01-22 13:55:49,270] INFO [Broker id=1] Finished LeaderAndIsr request in 209ms correlationId 1 from controller 1 for 1 partitions (state.change.logger)
policy-apex-pdp | [2024-01-22T13:55:50.565+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | ssl.truststore.location = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-01-22T13:55:08.344904632Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
policy-apex-pdp | [2024-01-22T13:55:50.570+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] (Re-)joining group
policy-pap | ssl.truststore.password = null
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,275] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=RF6sJHOeSLeKzNa2An6Amw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.34591333Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.008448ms
policy-apex-pdp | [2024-01-22T13:55:50.582+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Request joining group due to: need to re-join with the given member-id: consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2-bd794398-55d4-4516-bbe7-133fbc5867a3
policy-pap | ssl.truststore.type = JKS
policy-db-migrator |
kafka | [2024-01-22 13:55:49,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.350340825Z level=info msg="Executing migration" id="create alert_image table"
policy-apex-pdp | [2024-01-22T13:55:50.583+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator |
kafka | [2024-01-22 13:55:49,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.351054455Z level=info msg="Migration successfully executed" id="create alert_image table" duration=713.62µs
policy-apex-pdp | [2024-01-22T13:55:50.583+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] (Re-)joining group
policy-pap |
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
kafka | [2024-01-22 13:55:49,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:55:50.928+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
policy-pap | [2024-01-22T13:55:48.309+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.353484843Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
policy-apex-pdp | [2024-01-22T13:55:50.928+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
policy-pap | [2024-01-22T13:55:48.309+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
grafana | logger=migrator t=2024-01-22T13:55:08.35442486Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=940.147µs
kafka | [2024-01-22 13:55:49,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:55:53.595+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Successfully joined group with generation Generation{generationId=1, memberId='consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2-bd794398-55d4-4516-bbe7-133fbc5867a3', protocol='range'}
policy-pap | [2024-01-22T13:55:48.309+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931748309
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:08.357466415Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
kafka | [2024-01-22 13:55:49,283] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:55:53.602+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Finished assignment for group at generation 1: {consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2-bd794398-55d4-4516-bbe7-133fbc5867a3=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | [2024-01-22T13:55:48.309+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Subscribed to topic(s): policy-pdp-pap
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:08.357528697Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=63.192µs
kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:55:53.613+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Successfully synced group in generation Generation{generationId=1, memberId='consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2-bd794398-55d4-4516-bbe7-133fbc5867a3', protocol='range'}
policy-pap | [2024-01-22T13:55:48.310+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:08.363386352Z level=info msg="Executing migration" id=create_alert_configuration_history_table
kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:55:53.613+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2024-01-22T13:55:48.310+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=43ed8a24-8339-45ee-bd66-a36eac6c670e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@49c947f7
grafana | logger=migrator t=2024-01-22T13:55:08.364313828Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=927.296µs
policy-apex-pdp | [2024-01-22T13:55:53.615+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | [2024-01-22T13:55:48.310+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=43ed8a24-8339-45ee-bd66-a36eac6c670e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.367167568Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
policy-apex-pdp | [2024-01-22T13:55:53.654+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Found no committed offset for partition policy-pdp-pap-0
policy-pap | [2024-01-22T13:55:48.311+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.368119735Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=955.127µs
policy-apex-pdp | [2024-01-22T13:55:53.671+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2, groupId=e65163a7-0954-4bf8-9924-8c41fa40f9af] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-pap | allow.auto.create.topics = true
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.371748207Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
policy-apex-pdp | [2024-01-22T13:55:56.155+00:00|INFO|RequestLog|qtp830863979-31] 172.17.0.2 - policyadmin [22/Jan/2024:13:55:56 +0000] "GET /metrics HTTP/1.1" 200 10639 "-" "Prometheus/2.49.1"
policy-pap | auto.commit.interval.ms = 5000
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.372324063Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
policy-apex-pdp | [2024-01-22T13:56:09.893+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-pap | auto.include.jmx.reporter = true
policy-db-migrator |
kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.375078941Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c841c6d7-f089-47c8-88dd-5f2b47d779a1","timestampMs":1705931769892,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"}
policy-pap | auto.offset.reset = latest
policy-db-migrator |
kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.375560795Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=483.153µs
policy-apex-pdp | [2024-01-22T13:56:09.913+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | bootstrap.servers = [kafka:9092]
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.37858466Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c841c6d7-f089-47c8-88dd-5f2b47d779a1","timestampMs":1705931769892,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"}
policy-pap | check.crcs = true
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.379398422Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=814.422µs
policy-apex-pdp | [2024-01-22T13:56:09.916+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-pap | client.dns.lookup = use_all_dns_ips
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.383500878Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:56:10.084+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | client.id = consumer-policy-pap-4
grafana | logger=migrator t=2024-01-22T13:55:08.391941115Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.437307ms
policy-db-migrator |
kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f41ee548-273a-4dd1-a197-a877ac7fd0e5","timestampMs":1705931770028,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | client.rack =
grafana | logger=migrator t=2024-01-22T13:55:08.396190155Z level=info msg="Executing migration" id="create library_element table v1"
policy-db-migrator |
kafka | [2024-01-22 13:55:49,284] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:56:10.093+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-01-22T13:55:08.397152442Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=964.427µs
policy-db-migrator | > upgrade 0630-toscanodetype.sql
kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:56:10.093+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
policy-pap | default.api.timeout.ms = 60000
grafana | logger=migrator t=2024-01-22T13:55:08.401803763Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5cc52511-b48b-4fa0-a6e6-c267e358d5d1","timestampMs":1705931770093,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"}
policy-pap | enable.auto.commit = true
grafana | logger=migrator t=2024-01-22T13:55:08.402904424Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.100941ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:56:10.095+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-pap | exclude.internal.topics = true
grafana | logger=migrator t=2024-01-22T13:55:08.408198113Z level=info msg="Executing migration" id="create library_element_connection table v1"
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f41ee548-273a-4dd1-a197-a877ac7fd0e5","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"29114705-8887-4977-854d-e6da3f475b73","timestampMs":1705931770095,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | fetch.max.bytes = 52428800
grafana | logger=migrator t=2024-01-22T13:55:08.408912873Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=715.3µs
policy-db-migrator |
kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:56:10.117+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-01-22T13:55:08.412741761Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
policy-db-migrator |
kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5cc52511-b48b-4fa0-a6e6-c267e358d5d1","timestampMs":1705931770093,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"}
policy-pap | fetch.min.bytes = 1
grafana | logger=migrator t=2024-01-22T13:55:08.413986026Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.244265ms
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:56:10.118+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-pap | group.id = policy-pap
grafana | logger=migrator t=2024-01-22T13:55:08.420321744Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | group.instance.id = null
policy-apex-pdp | [2024-01-22T13:56:10.126+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f41ee548-273a-4dd1-a197-a877ac7fd0e5","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"29114705-8887-4977-854d-e6da3f475b73","timestampMs":1705931770095,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
policy-pap | heartbeat.interval.ms = 3000
kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:56:10.127+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2024-01-22T13:56:10.160+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
policy-pap | interceptor.classes = []
kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1e11feec-3c6a-4861-a178-a1d471866c80","timestampMs":1705931770028,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-01-22T13:56:10.163+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator |
policy-pap | internal.leave.group.on.close = true
kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"1e11feec-3c6a-4861-a178-a1d471866c80","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"029c38c3-9b61-40d5-84a1-9b0554ef65e1","timestampMs":1705931770162,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-01-22T13:56:10.175+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator |
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
kafka | [2024-01-22 13:55:49,285] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"1e11feec-3c6a-4861-a178-a1d471866c80","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"029c38c3-9b61-40d5-84a1-9b0554ef65e1","timestampMs":1705931770162,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-01-22T13:56:10.176+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-pap | isolation.level = read_uncommitted
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:56:10.217+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8cc3dde7-8a50-459b-a008-976e7631331f","timestampMs":1705931770192,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:56:10.218+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8cc3dde7-8a50-459b-a008-976e7631331f","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"49cfa103-b65d-4118-9a66-07d688cd7200","timestampMs":1705931770218,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | max.partition.fetch.bytes = 1048576
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-01-22T13:56:10.227+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8cc3dde7-8a50-459b-a008-976e7631331f","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"49cfa103-b65d-4118-9a66-07d688cd7200","timestampMs":1705931770218,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
policy-pap | max.poll.interval.ms = 300000
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.421778595Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.457191ms
policy-apex-pdp | [2024-01-22T13:56:10.228+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator |
policy-pap | max.poll.records = 500
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.425686995Z level=info msg="Executing migration" id="increase max description length to 2048"
policy-apex-pdp | [2024-01-22T13:56:56.082+00:00|INFO|RequestLog|qtp830863979-28] 172.17.0.2 - policyadmin [22/Jan/2024:13:56:56 +0000] "GET /metrics HTTP/1.1" 200 10647 "-" "Prometheus/2.49.1"
policy-db-migrator |
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.425715355Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=31.64µs
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-pap | metric.reporters = []
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.432501236Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
policy-db-migrator | --------------
policy-pap | metrics.num.samples = 2
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:08.43261546Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=120.394µs
policy-pap | metrics.recording.level = INFO
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-01-22T13:55:08.437306752Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
policy-pap | metrics.sample.window.ms = 30000
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:08.437935559Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=629.878µs
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:08.463152669Z level=info msg="Executing migration" id="create data_keys table"
policy-pap | receive.buffer.bytes = 65536
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:08.464692042Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.542973ms
policy-pap | reconnect.backoff.max.ms = 1000
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0670-toscapolicies.sql
grafana | logger=migrator t=2024-01-22T13:55:08.472653786Z level=info msg="Executing migration" id="create secrets table"
policy-pap | reconnect.backoff.ms = 50
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:08.473252133Z level=info msg="Migration successfully executed" id="create secrets table" duration=599.347µs
policy-pap | request.timeout.ms = 30000
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
grafana | logger=migrator t=2024-01-22T13:55:08.478493Z level=info msg="Executing migration" id="rename data_keys name column to id"
policy-pap | retry.backoff.ms = 100
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:08.526527092Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=48.032331ms
policy-pap | sasl.client.callback.handler.class = null
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:08.531026858Z level=info msg="Executing migration" id="add name column into data_keys"
policy-pap | sasl.jaas.config = null
kafka | [2024-01-22 13:55:49,286] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:08.539125466Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=8.095428ms
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | [2024-01-22 13:55:49,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
grafana | logger=migrator t=2024-01-22T13:55:08.542996835Z level=info msg="Executing migration" id="copy data_keys id column values into name"
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
kafka | [2024-01-22 13:55:49,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-01-22T13:55:08.543196551Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=197.515µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | [2024-01-22 13:55:49,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-01-22T13:55:08.54567334Z level=info msg="Executing migration" id="rename data_keys name column to label"
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-01-22T13:55:08.595851412Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=50.172242ms
policy-db-migrator |
kafka | [2024-01-22 13:55:49,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-01-22T13:55:08.803276087Z level=info msg="Executing migration" id="rename data_keys id column back to name"
policy-db-migrator |
kafka | [2024-01-22 13:55:49,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
policy-pap | sasl.login.class = null
grafana | logger=migrator t=2024-01-22T13:55:08.859097928Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=55.82038ms
policy-db-migrator | > upgrade 0690-toscapolicy.sql
kafka | [2024-01-22 13:55:49,286] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
policy-pap | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-01-22T13:55:08.863120631Z level=info msg="Executing migration" id="create kv_store table v1"
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-01-22T13:55:08.863907283Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=790.272µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-01-22T13:55:08.867557266Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-01-22T13:55:08.868505722Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=947.786µs policy-db-migrator | kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) policy-pap | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-01-22T13:55:08.87303759Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" policy-db-migrator | kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-01-22T13:55:08.873369159Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=330.929µs policy-db-migrator | > upgrade 0700-toscapolicytype.sql kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-01-22T13:55:08.877442214Z level=info msg="Executing migration" id="create permission table" policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-01-22T13:55:08.878355689Z level=info msg="Migration successfully executed" id="create permission table" duration=912.825µs policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-01-22T13:55:08.883609087Z level=info msg="Executing migration" id="add unique index permission.role_id" policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-01-22T13:55:08.884935165Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.325767ms policy-db-migrator | kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-01-22T13:55:08.887641301Z level=info msg="Executing migration" id="add unique index role_id_action_scope" policy-db-migrator | kafka | [2024-01-22 13:55:49,287] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-01-22T13:55:08.888881916Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.240365ms policy-db-migrator | > upgrade 0710-toscapolicytypes.sql kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-01-22T13:55:08.892724314Z level=info msg="Executing migration" id="create role table" kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-01-22T13:55:08.893571428Z level=info msg="Migration successfully executed" id="create role table" duration=846.513µs kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-01-22T13:55:08.898659651Z level=info msg="Executing migration" id="add column display_name" policy-db-migrator | policy-pap | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-01-22T13:55:08.907201191Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.54135ms kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-01-22T13:55:08.910893935Z level=info msg="Executing migration" id="add column group_name" kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-01-22T13:55:08.916488312Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.595407ms kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) policy-db-migrator | -------------- policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-01-22T13:55:08.922981945Z level=info msg="Executing migration" id="add index role.org_id" kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | security.providers = null grafana | logger=migrator t=2024-01-22T13:55:08.923998834Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.016798ms kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) policy-db-migrator | -------------- policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-01-22T13:55:08.929970272Z level=info msg="Executing migration" id="add unique index role_org_id_name" kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) policy-db-migrator | policy-pap | session.timeout.ms = 45000 grafana | logger=migrator t=2024-01-22T13:55:08.931942387Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.971316ms kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) policy-db-migrator | policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-01-22T13:55:08.935995761Z level=info msg="Executing migration" id="add index role_org_id_uid" kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-01-22T13:55:08.937171824Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.175173ms kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-01-22T13:55:08.942680669Z level=info msg="Executing migration" id="create team role table" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-01-22T13:55:08.943552624Z level=info msg="Migration successfully executed" id="create team role table" duration=871.735µs kafka | [2024-01-22 13:55:49,287] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) policy-db-migrator | policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-01-22T13:55:08.947457813Z level=info msg="Executing migration" id="add index team_role.org_id" kafka | [2024-01-22 13:55:49,288] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) policy-db-migrator | policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-01-22T13:55:08.948553384Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.094951ms kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-01-22T13:55:08.951934449Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-01-22T13:55:08.953225536Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.292317ms kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-01-22T13:55:08.956830867Z level=info msg="Executing migration" id="add index team_role.team_id" kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.keystore.key = null grafana | logger=migrator t=2024-01-22T13:55:08.958607067Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.77539ms kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) policy-db-migrator | policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-01-22T13:55:08.96260951Z level=info msg="Executing migration" id="create user role table" kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) policy-db-migrator | policy-pap | ssl.keystore.password = null grafana | logger=migrator t=2024-01-22T13:55:08.963419603Z level=info msg="Migration successfully executed" id="create user role table" duration=808.152µs kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-01-22T13:55:08.967381684Z level=info msg="Executing migration" id="add index user_role.org_id" kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-01-22T13:55:08.968617509Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.232855ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) policy-pap | ssl.provider = null grafana | logger=migrator t=2024-01-22T13:55:08.973258449Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-01-22T13:55:08.974630778Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.370809ms policy-db-migrator | kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) policy-pap | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-01-22T13:55:08.977278752Z level=info msg="Executing migration" id="add index user_role.user_id" policy-db-migrator | kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-01-22T13:55:08.97862678Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.347598ms policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-01-22T13:55:08.984702311Z level=info msg="Executing migration" id="create builtin role table" policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) policy-pap | ssl.truststore.password = null grafana | logger=migrator t=2024-01-22T13:55:08.985661728Z level=info msg="Migration successfully executed" id="create builtin role table" duration=960.817µs policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-01-22T13:55:08.990859604Z level=info msg="Executing migration" id="add index builtin_role.role_id" policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-01-22T13:55:08.991885653Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.027679ms policy-db-migrator | kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) policy-pap | grafana | logger=migrator t=2024-01-22T13:55:08.995272249Z level=info msg="Executing migration" id="add index builtin_role.name" kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) policy-pap | [2024-01-22T13:55:48.316+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:08.996306688Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.035699ms policy-pap | [2024-01-22T13:55:48.316+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-db-migrator | > upgrade 0770-toscarequirement.sql kafka | [2024-01-22 13:55:49,288] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:08.999706523Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,288] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.007784114Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.076181ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) kafka | [2024-01-22 13:55:49,289] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) policy-pap | [2024-01-22T13:55:48.316+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931748316 grafana | logger=migrator t=2024-01-22T13:55:09.013132475Z level=info msg="Executing migration" id="add index builtin_role.org_id" policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-22T13:55:48.316+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.015888908Z level=info 
msg="Migration successfully executed" id="add index builtin_role.org_id" duration=2.763783ms kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-22T13:55:48.316+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.068863313Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-22T13:55:48.317+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=43ed8a24-8339-45ee-bd66-a36eac6c670e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | > upgrade 0780-toscarequirements.sql grafana | logger=migrator t=2024-01-22T13:55:09.070776784Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.917031ms kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-22T13:55:48.317+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=79c954dd-4645-472b-b928-ee2d4186f7c1, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.075780306Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-22T13:55:48.317+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=2e284948-a74b-435c-88b6-e1422c57c262, alive=false, publisher=null]]: starting policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) grafana | logger=migrator t=2024-01-22T13:55:09.077109791Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.329925ms kafka | 
[2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-22T13:55:48.334+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.081441425Z level=info msg="Executing migration" id="add unique index role.uid" kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) policy-pap | acks = -1 policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.082580385Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.13883ms kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) policy-pap | auto.include.jmx.reporter = true policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.088124931Z level=info msg="Executing migration" id="create seed assignment table" kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) policy-pap | batch.size = 16384 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql grafana | logger=migrator t=2024-01-22T13:55:09.08922793Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.102739ms kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.094335344Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) policy-pap | buffer.memory = 33554432 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-01-22T13:55:09.095501735Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.163911ms kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.103221109Z level=info msg="Executing migration" id="add column hidden to role table" kafka | [2024-01-22 13:55:49,290] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) policy-pap | client.id = producer-1 
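[editor's sketch] The ConsumerConfig dump and the "Subscribed to topic(s): policy-pdp-pap" line above are standard Apache Kafka client behaviour. A minimal Java consumer wired with the same logged values (bootstrap server kafka:9092, group policy-pap, topic policy-pdp-pap, StringDeserializer, session.timeout.ms = 45000) might look like the following; the class name and poll loop are illustrative, not the policy-pap implementation:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        // Property values mirror the ConsumerConfig dump logged by policy-pap above.
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("group.id", "policy-pap");
        props.put("session.timeout.ms", "45000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap")); // topic taken from the log
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
            }
        }
    }
}
```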
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:09.109184966Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=5.966178ms
kafka | [2024-01-22 13:55:49,291] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | compression.type = none
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:09.112079602Z level=info msg="Executing migration" id="permission kind migration"
kafka | [2024-01-22 13:55:49,291] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | connections.max.idle.ms = 540000
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
grafana | logger=migrator t=2024-01-22T13:55:09.121070749Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.989217ms
kafka | [2024-01-22 13:55:49,291] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | delivery.timeout.ms = 120000
grafana | logger=migrator t=2024-01-22T13:55:09.223675972Z level=info msg="Executing migration" id="permission attribute migration"
policy-pap | enable.idempotence = true
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,293] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:09.239710374Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=16.039292ms
policy-pap | interceptor.classes = []
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
grafana | logger=migrator t=2024-01-22T13:55:09.243140785Z level=info msg="Executing migration" id="permission identifier migration"
kafka | [2024-01-22 13:55:49,293] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,293] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:09.248802144Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.660959ms
policy-pap | linger.ms = 0
kafka | [2024-01-22 13:55:49,293] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:09.255185732Z level=info msg="Executing migration" id="add permission identifier index"
policy-pap | max.block.ms = 60000
kafka | [2024-01-22 13:55:49,293] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
grafana | logger=migrator t=2024-01-22T13:55:09.256384584Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.198862ms
policy-pap | max.in.flight.requests.per.connection = 5
kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:09.261448847Z level=info msg="Executing migration" id="create query_history table v1"
grafana | logger=migrator t=2024-01-22T13:55:09.263586183Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=2.136716ms
kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | max.request.size = 1048576
grafana | logger=migrator t=2024-01-22T13:55:09.26952189Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
policy-db-migrator | --------------
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:09.270894286Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.371996ms
policy-db-migrator |
policy-pap | metadata.max.idle.ms = 300000
kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:09.275387704Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
policy-db-migrator |
policy-pap | metric.reporters = []
kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:09.275548718Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=160.644µs
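[editor's sketch] The ProducerConfig dump that policy-pap began printing above (acks = -1, enable.idempotence = true, bootstrap.servers = [kafka:9092], client.id = producer-1, linger.ms = 0; the dump continues below) likewise maps onto the stock Java producer API. A minimal sketch under those logged values; the topic comes from the log, while the class name and payload are invented for illustration:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PapProducerSketch {
    public static void main(String[] args) {
        // Values mirror the ProducerConfig dump in the log above.
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("client.id", "producer-1");
        props.put("acks", "all");                // acks = -1 in the log is the wire form of "all"
        props.put("enable.idempotence", "true");
        props.put("linger.ms", "0");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Topic from the log; the payload is purely illustrative.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
            producer.flush();
        }
    }
}
```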
policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-pap | metrics.num.samples = 2 kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.278930968Z level=info msg="Executing migration" id="rbac disabled migrator" policy-db-migrator | -------------- policy-pap | metrics.recording.level = INFO kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.27902339Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=91.613µs policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.282592934Z level=info msg="Executing migration" id="teams permissions migration" policy-pap | partitioner.adaptive.partitioning.enable = true policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.283100197Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=507.083µs policy-pap | partitioner.availability.timeout.ms = 0 policy-db-migrator | kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.28547304Z level=info msg="Executing migration" id="dashboard permissions" policy-pap | partitioner.class = null policy-db-migrator | kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.288862639Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=3.390319ms policy-pap | partitioner.ignore.keys = false policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.299020697Z level=info msg="Executing migration" id="dashboard permissions uid scopes" policy-pap | receive.buffer.bytes = 32768 policy-db-migrator | -------------- kafka | [2024-01-22 
13:55:49,294] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.299749636Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=730.499µs policy-pap | reconnect.backoff.max.ms = 1000 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) kafka | [2024-01-22 13:55:49,294] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.303226898Z level=info msg="Executing migration" id="drop managed folder create actions" policy-pap | reconnect.backoff.ms = 50 policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.303527556Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=300.247µs policy-pap | request.timeout.ms = 30000 policy-db-migrator | kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.307925391Z level=info msg="Executing migration" id="alerting notification permissions" policy-pap | retries = 2147483647 policy-db-migrator | kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) policy-pap | retry.backoff.ms = 100 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.308423325Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=500.963µs policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-pap | sasl.client.callback.handler.class = null kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.312667836Z level=info msg="Executing migration" id="create query_history_star table v1" policy-db-migrator | -------------- policy-pap | sasl.jaas.config = null kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.31393526Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.271314ms policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica 
(state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.320860822Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" policy-db-migrator | -------------- policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.322157916Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.297044ms policy-db-migrator | policy-pap | sasl.kerberos.service.name = null kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.325408682Z level=info msg="Executing migration" id="add column org_id in query_history_star" policy-db-migrator | policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.333884795Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.541765ms policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.33937189Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" policy-db-migrator | -------------- policy-pap | sasl.login.callback.handler.class = null kafka | [2024-01-22 13:55:49,295] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.339468722Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=95.952µs policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) policy-pap | sasl.login.class = null kafka | [2024-01-22 13:55:49,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.343200191Z level=info msg="Executing migration" id="create correlation table v1" policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | kafka | [2024-01-22 13:55:49,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.344122315Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=920.344µs policy-pap | sasl.login.read.timeout.ms = null 
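
The policy-pap lines interleaved above are a Kafka ProducerConfig dump printed when the component constructs its producer. As a minimal sketch of what such a producer looks like in client code, assuming the standard Apache Kafka Java client and reusing a few values from the dump (acks = -1, enable.idempotence = true, retries = 2147483647, StringSerializer for keys and values, bootstrap server kafka:9092); the topic name below is hypothetical and not taken from this log:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirrored from the ProducerConfig dump in the log above.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "-1");                  // logged: acks = -1
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);    // logged: enable.idempotence = true
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);  // logged: retries = 2147483647
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "policy-pdp-pap" is an illustrative topic name, not read from this log.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "key", "value"));
        }
    }
}

With enable.idempotence = true the client requires acks=all semantics and a bounded number of in-flight requests (the dump shows max.in.flight.requests.per.connection = 5), which is why the log later reports "Instantiated an idempotent producer."
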
policy-db-migrator | kafka | [2024-01-22 13:55:49,296] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.348025828Z level=info msg="Executing migration" id="add index correlations.uid" policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql kafka | [2024-01-22 13:55:49,296] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.349498797Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.473189ms policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,299] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.35494215Z level=info msg="Executing migration" id="add index correlations.source_uid" policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-01-22 13:55:49,299] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-01-22T13:55:09.357350633Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=2.406563ms kafka | [2024-01-22 13:55:49,299] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-01-22 13:55:49,300] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.361441601Z level=info msg="Executing migration" id="add correlation config column" policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-01-22 13:55:49,300] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.370152461Z level=info msg="Migration successfully executed" id="add correlation 
config column" duration=8.71142ms policy-db-migrator | policy-pap | sasl.mechanism = GSSAPI kafka | [2024-01-22 13:55:49,300] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.373669613Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql kafka | [2024-01-22 13:55:49,300] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.374445224Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=775.771µs policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-01-22 13:55:49,300] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.380032361Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.expected.audience = null kafka | [2024-01-22 13:55:49,301] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.381251023Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.218502ms policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) policy-pap | sasl.oauthbearer.expected.issuer = null kafka | [2024-01-22 13:55:49,301] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,301] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.384442997Z level=info 
msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | kafka | [2024-01-22 13:55:49,301] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.41908962Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=34.626292ms policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-01-22 13:55:49,302] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.42631441Z level=info msg="Executing migration" id="create correlation v2" policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | [2024-01-22 13:55:49,302] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.428918839Z level=info msg="Migration successfully executed" id="create correlation v2" duration=2.435384ms policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | [2024-01-22 13:55:49,302] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.433032967Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | [2024-01-22 13:55:49,302] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.434879046Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.845149ms policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-01-22 13:55:49,302] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-01-22T13:55:09.442980389Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" policy-db-migrator | policy-pap | security.protocol = PLAINTEXT kafka | [2024-01-22 13:55:49,302] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.445060644Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=2.083375ms policy-pap | security.providers = null kafka | [2024-01-22 13:55:49,302] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.452315295Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" policy-pap | send.buffer.bytes = 131072 kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.453498566Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.185011ms kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-01-22T13:55:09.457592144Z level=info msg="Executing migration" id="copy correlation v1 to v2" kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE INDEX 
FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-01-22T13:55:09.457935393Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=344.209µs kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-01-22T13:55:09.46007369Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-01-22T13:55:09.461262161Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.189141ms kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-01-22T13:55:09.466892649Z level=info msg="Executing migration" id="add provisioning column" policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-01-22T13:55:09.474705645Z level=info msg="Migration successfully executed" id="add provisioning column" duration=7.811466ms kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-01-22T13:55:09.478522606Z level=info msg="Executing migration" id="create entity_events table" policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,303] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.479359188Z level=info msg="Migration successfully executed" id="create entity_events table" duration=835.962µs policy-pap | ssl.keystore.key = null policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.485442578Z level=info msg="Executing migration" id="create dashboard public config v1" policy-pap | ssl.keystore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.486433244Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=991.006µs policy-pap | ssl.keystore.password = null policy-db-migrator | kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.489259729Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.489718311Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" policy-pap | ssl.keystore.type = JKS kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.493719376Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.494405074Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 
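
The policy-db-migrator entries above step through numbered upgrade scripts (0830, 0840, ... 0910), while the grafana migrator lines show the complementary bookkeeping: a migration that already ran is skipped ("Already executed, but not recorded in migration log"). A minimal JDBC sketch of that guard-then-apply pattern, assuming a hypothetical migration_log tracking table and placeholder connection details; only the DDL string is copied verbatim from the 0910 script output above (including the nodeTemplatessVersion column name exactly as logged):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class MigrationRunnerSketch {
    // Hypothetical tracking table; the real migrator records applied scripts in its own schema.
    static boolean alreadyApplied(Connection c, String script) throws Exception {
        try (PreparedStatement ps =
                 c.prepareStatement("SELECT 1 FROM migration_log WHERE script = ?")) {
            ps.setString(1, script);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Connection details are placeholders, not taken from the log.
        try (Connection c = DriverManager.getConnection(
                "jdbc:mariadb://mariadb:3306/policyadmin", "user", "pass")) {
            String script = "0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql";
            if (!alreadyApplied(c, script)) {
                try (Statement st = c.createStatement()) {
                    // DDL copied from the policy-db-migrator output above
                    // (column name nodeTemplatessVersion appears as logged).
                    st.execute("CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName "
                             + "ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)");
                }
                try (PreparedStatement ps =
                         c.prepareStatement("INSERT INTO migration_log(script) VALUES (?)")) {
                    ps.setString(1, script);
                    ps.executeUpdate();
                }
            } // else: skip, mirroring grafana's "Already executed" message above
        }
    }
}
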
policy-pap | ssl.provider = null policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.497520446Z level=info msg="Executing migration" id="Drop old dashboard public config table" policy-pap | ssl.secure.random.implementation = null policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.498393119Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=872.163µs policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.502146868Z level=info msg="Executing migration" id="recreate dashboard public config v1" policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.503118104Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=971.266µs policy-pap | ssl.truststore.certificates = null kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.50562119Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" policy-pap | ssl.truststore.location = null policy-db-migrator | kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.506708998Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.089909ms policy-pap | ssl.truststore.password = null policy-db-migrator | kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.511944026Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-pap | ssl.truststore.type = JKS policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql kafka | [2024-01-22 13:55:49,304] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.513197219Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.252863ms policy-pap | transaction.timeout.ms = 60000 policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | transactional.id = null policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) grafana | logger=migrator t=2024-01-22T13:55:09.520463451Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.521865048Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.404287ms policy-pap | policy-db-migrator | kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.525313858Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql grafana | logger=migrator t=2024-01-22T13:55:09.526093589Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=779.801µs policy-pap | [2024-01-22T13:55:48.347+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] 
Instantiated an idempotent producer. kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.530857944Z level=info msg="Executing migration" id="Drop public config table" kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) policy-pap | [2024-01-22T13:55:48.364+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 grafana | logger=migrator t=2024-01-22T13:55:09.531425769Z level=info msg="Migration successfully executed" id="Drop public config table" duration=567.955µs policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.534749607Z level=info msg="Executing migration" id="Recreate dashboard public config v2" kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-22T13:55:48.364+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.535854246Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.104099ms policy-pap | [2024-01-22T13:55:48.364+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931748364 policy-db-migrator | kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.541173446Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" policy-pap | [2024-01-22T13:55:48.365+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=2e284948-a74b-435c-88b6-e1422c57c262, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql grafana | logger=migrator t=2024-01-22T13:55:09.542295886Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.12239ms policy-pap | [2024-01-22T13:55:48.365+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink 
[partitionId=9af27cde-aa4f-4a25-a18a-53e421ca9375, alive=false, publisher=null]]: starting policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) grafana | logger=migrator t=2024-01-22T13:55:09.698489961Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" policy-pap | [2024-01-22T13:55:48.365+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.700107993Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.621143ms policy-db-migrator | kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.707119168Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-pap | acks = -1 policy-db-migrator | kafka | [2024-01-22 13:55:49,305] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql grafana | logger=migrator t=2024-01-22T13:55:09.708196566Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.078138ms policy-pap | auto.include.jmx.reporter = true policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.71288599Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" policy-pap | batch.size = 16384 grafana | logger=migrator t=2024-01-22T13:55:09.752212446Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=39.321346ms kafka | [2024-01-22 13:55:49,306] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | bootstrap.servers = [kafka:9092] grafana | logger=migrator t=2024-01-22T13:55:09.758296686Z level=info msg="Executing migration" id="add annotations_enabled column" kafka | [2024-01-22 13:55:49,324] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) policy-db-migrator | -------------- policy-pap | buffer.memory = 33554432 grafana | logger=migrator t=2024-01-22T13:55:09.764991163Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.704217ms kafka | [2024-01-22 13:55:49,324] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) policy-db-migrator | policy-pap | client.dns.lookup = use_all_dns_ips kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.768591187Z level=info msg="Executing migration" id="add time_selection_enabled column" policy-pap | client.id = producer-2 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql grafana | logger=migrator t=2024-01-22T13:55:09.774690348Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.098611ms policy-pap | compression.type = none kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) policy-db-migrator | -------------- policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-01-22T13:55:09.779495905Z level=info msg="Executing migration" id="delete orphaned public dashboards" kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.779830844Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=335.488µs policy-pap | delivery.timeout.ms = 120000 policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.783418498Z level=info msg="Executing 
migration" id="add share column" kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.793055652Z level=info msg="Migration successfully executed" id="add share column" duration=9.636074ms policy-pap | enable.idempotence = true policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql grafana | logger=migrator t=2024-01-22T13:55:09.796143743Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.796375799Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=230.666µs policy-pap | interceptor.classes = [] policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-01-22T13:55:09.800031496Z level=info msg="Executing migration" id="create file table" policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.800767745Z level=info msg="Migration successfully executed" id="create file table" duration=736.019µs policy-pap | linger.ms = 0 policy-db-migrator | policy-pap | max.block.ms = 60000 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.80360676Z level=info msg="Executing migration" id="file table idx: path natural pk" policy-pap | max.in.flight.requests.per.connection = 5 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) policy-db-migrator | policy-pap | max.request.size = 1048576 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.805232943Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.625253ms policy-pap | metadata.max.age.ms = 300000 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-pap | 
metadata.max.idle.ms = 300000 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.809452704Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.810574803Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.122819ms policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-01-22T13:55:09.815069372Z level=info msg="Executing migration" id="create file_meta table" policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) policy-db-migrator | -------------- policy-pap | metrics.num.samples = 2 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.815824032Z level=info msg="Migration successfully executed" id="create file_meta table" duration=754.86µs kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) policy-db-migrator | policy-pap | metrics.recording.level = INFO kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql grafana | logger=migrator t=2024-01-22T13:55:09.81992096Z level=info msg="Executing migration" id="file table idx: path key" policy-pap | metrics.sample.window.ms = 30000 kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.821175893Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.252783ms policy-pap | partitioner.adaptive.partitioning.enable = true kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.826605316Z level=info msg="Executing migration" id="set path collation in 
file table" policy-pap | partitioner.availability.timeout.ms = 0 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-22 13:55:49,325] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) grafana | logger=migrator t=2024-01-22T13:55:09.826690718Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=89.482µs policy-pap | partitioner.class = null policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) policy-pap | partitioner.ignore.keys = false policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.830763005Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" policy-pap | receive.buffer.bytes = 32768 policy-db-migrator | kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) policy-pap | reconnect.backoff.max.ms = 1000 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql grafana | logger=migrator t=2024-01-22T13:55:09.830839657Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=75.452µs kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) policy-pap | reconnect.backoff.ms = 50 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.835299365Z level=info msg="Executing migration" id="managed permissions migration" kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) policy-pap | request.timeout.ms = 30000 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-01-22T13:55:09.835850659Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=551.114µs kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) policy-pap | retries = 2147483647 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.840482511Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) policy-pap | 
retry.backoff.ms = 100 policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.840685797Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=202.516µs kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | grafana | logger=migrator t=2024-01-22T13:55:09.845443362Z level=info msg="Executing migration" id="RBAC action name migrator" kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) policy-pap | sasl.jaas.config = null grafana | logger=migrator t=2024-01-22T13:55:09.846439748Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=998.686µs kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql grafana | logger=migrator t=2024-01-22T13:55:09.850834634Z level=info msg="Executing migration" id="Add UID column to playlist" kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-01-22T13:55:09.860790856Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.954882ms kafka | [2024-01-22 13:55:49,326] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-22T13:55:09.864446853Z level=info msg="Executing migration" id="Update uid column values in playlist" kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-01-22T13:55:09.864637338Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=191.395µs kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.kerberos.service.name = null kafka 
| [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:09.867562785Z level=info msg="Executing migration" id="Add index for uid in playlist"
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:09.868659064Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.096399ms
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator |
kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:09.872411483Z level=info msg="Executing migration" id="update group index for alert rules"
policy-pap | sasl.login.callback.handler.class = null
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:09.872796463Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=385.511µs
policy-pap | sasl.login.class = null
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:09.876948672Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:09.877153107Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=200.935µs
kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
policy-pap | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-01-22T13:55:09.882117298Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
policy-pap | sasl.login.read.timeout.ms = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:09.88256762Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=452.152µs
policy-db-migrator |
kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:09.886593156Z level=info msg="Executing migration" id="add action column to seed_assignment"
kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-01-22T13:55:09.90080247Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=14.205924ms
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-db-migrator |
kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
grafana | logger=migrator t=2024-01-22T13:55:09.904849497Z level=info msg="Executing migration" id="add scope column to seed_assignment"
kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-01-22T13:55:09.916394901Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=11.544874ms
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | [2024-01-22 13:55:49,327] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
policy-db-migrator |
policy-db-migrator |
policy-pap | sasl.mechanism = GSSAPI
kafka | [2024-01-22 13:55:49,329] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
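The sasl.kerberos.* and sasl.login.* lines interleaved above are the policy-pap Kafka producer's configuration dump. As a rough illustration only, a producer carrying exactly these settings could be built as below; the broker address, client code structure, and topic payload are assumptions for the sketch, not values taken from this run (note that with a PLAINTEXT listener the SASL login-refresh settings stay inert).

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092"); // assumption: compose-network broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // These values mirror the policy-pap config dump printed in the log above.
        props.put("sasl.login.refresh.buffer.seconds", "300");
        props.put("sasl.login.refresh.min.period.seconds", "60");
        props.put("sasl.login.retry.backoff.ms", "100");
        props.put("sasl.login.retry.backoff.max.ms", "10000");
        props.put("sasl.mechanism", "GSSAPI");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Illustrative payload shaped like the PDP_STATUS messages seen later in the log.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
        }
    }
}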
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-01-22 13:55:49,329] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger)
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
grafana | logger=migrator t=2024-01-22T13:55:09.920932121Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | [2024-01-22 13:55:49,337] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:09.922151453Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.219012ms
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | [2024-01-22 13:55:49,338] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-01-22 13:55:49,339] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-22T13:55:09.92620650Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,339] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator |
kafka | [2024-01-22 13:55:49,339] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:10.051176416Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=124.966476ms
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator |
kafka | [2024-01-22 13:55:49,350] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-22T13:55:10.064630829Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | [2024-01-22 13:55:49,351] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
grafana | logger=migrator t=2024-01-22T13:55:10.067262448Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.591258ms
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,351] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
kafka | [2024-01-22 13:55:49,351] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-22T13:55:10.072231799Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | [2024-01-22 13:55:49,351] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:10.074238551Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=2.006962ms
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | [2024-01-22 13:55:49,359] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-22T13:55:10.077925198Z level=info msg="Executing migration" id="add primary key to seed_assigment"
policy-db-migrator |
policy-pap | security.protocol = PLAINTEXT
kafka | [2024-01-22 13:55:49,360] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator |
policy-pap | security.providers = null
kafka | [2024-01-22 13:55:49,360] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-01-22T13:55:10.116800967Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=38.874249ms
policy-db-migrator | > upgrade 0100-pdp.sql
kafka | [2024-01-22 13:55:49,360] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-01-22T13:55:10.126902712Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,361] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:10.127599561Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=695.478µs
kafka | [2024-01-22 13:55:49,370] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
grafana | logger=migrator t=2024-01-22T13:55:10.132827668Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
kafka | [2024-01-22 13:55:49,371] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | ssl.cipher.suites = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:10.13329029Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=461.982µs
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-db-migrator |
kafka | [2024-01-22 13:55:49,371] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-22T13:55:10.136401911Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
policy-pap | ssl.endpoint.identification.algorithm = https
policy-db-migrator |
kafka | [2024-01-22 13:55:49,371] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | ssl.engine.factory.class = null
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
grafana | logger=migrator t=2024-01-22T13:55:10.13674009Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=337.889µs
kafka | [2024-01-22 13:55:49,371] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | ssl.key.password = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:10.140105499Z level=info msg="Executing migration" id="create folder table"
kafka | [2024-01-22 13:55:49,379] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | ssl.keymanager.algorithm = SunX509
policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
grafana | logger=migrator t=2024-01-22T13:55:10.141194767Z level=info msg="Migration successfully executed" id="create folder table" duration=1.088508ms
policy-pap | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-01-22T13:55:10.147706148Z level=info msg="Executing migration" id="Add index for parent_uid"
kafka | [2024-01-22 13:55:49,380] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
policy-pap | ssl.keystore.key = null
grafana | logger=migrator t=2024-01-22T13:55:10.149733711Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.024093ms
kafka | [2024-01-22 13:55:49,380] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | ssl.keystore.location = null
grafana | logger=migrator t=2024-01-22T13:55:10.154912877Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
kafka | [2024-01-22 13:55:49,380] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | ssl.keystore.password = null
grafana | logger=migrator t=2024-01-22T13:55:10.15654163Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.627823ms
kafka | [2024-01-22 13:55:49,380] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
policy-pap | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-01-22T13:55:10.161433238Z level=info msg="Executing migration" id="Update folder title length"
kafka | [2024-01-22 13:55:49,389] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-01-22 13:55:49,390] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
grafana | logger=migrator t=2024-01-22T13:55:10.161458119Z level=info msg="Migration successfully executed" id="Update folder title length" duration=24.481µs
policy-pap | ssl.provider = null
kafka | [2024-01-22 13:55:49,390] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:10.173038442Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
policy-pap | ssl.secure.random.implementation = null
kafka | [2024-01-22 13:55:49,390] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:10.175152878Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.113926ms
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:10.183021104Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
kafka | [2024-01-22 13:55:49,390] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
grafana | logger=migrator t=2024-01-22T13:55:10.184413621Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.391977ms
policy-pap | ssl.truststore.certificates = null
kafka | [2024-01-22 13:55:49,401] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:10.191735363Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
policy-pap | ssl.truststore.location = null
kafka | [2024-01-22 13:55:49,403] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-22T13:55:10.192941164Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.205651ms
policy-pap | ssl.truststore.password = null
policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
grafana | logger=migrator t=2024-01-22T13:55:10.201059957Z level=info msg="Executing migration" id="create anon_device table"
policy-pap | ssl.truststore.type = JKS
kafka | [2024-01-22 13:55:49,403] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:10.20193567Z level=info msg="Migration successfully executed" id="create anon_device table" duration=875.303µs
policy-pap | transaction.timeout.ms = 60000
kafka | [2024-01-22 13:55:49,403] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:10.211922702Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
policy-pap | transactional.id = null
kafka | [2024-01-22 13:55:49,403] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-22T13:55:10.213485073Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.560951ms
kafka | [2024-01-22 13:55:49,409] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-22T13:55:10.223827424Z level=info msg="Executing migration" id="add index anon_device.updated_at"
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
grafana | logger=migrator t=2024-01-22T13:55:10.225055786Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.228312ms
policy-pap |
kafka | [2024-01-22 13:55:49,409] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:10.232846611Z level=info msg="Executing migration" id="create signing_key table"
kafka | [2024-01-22 13:55:49,410] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
grafana | logger=migrator t=2024-01-22T13:55:10.234568946Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.719175ms
policy-db-migrator | --------------
policy-pap | [2024-01-22T13:55:48.366+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
kafka | [2024-01-22 13:55:49,410] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | [2024-01-22T13:55:48.369+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
grafana | logger=migrator t=2024-01-22T13:55:10.245983005Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
kafka | [2024-01-22 13:55:49,410] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-22T13:55:48.369+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
grafana | logger=migrator t=2024-01-22T13:55:10.247137496Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.155401ms
kafka | [2024-01-22 13:55:49,416] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
policy-pap | [2024-01-22T13:55:48.369+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705931748369
kafka | [2024-01-22 13:55:49,416] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-22T13:55:10.257020395Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
policy-pap | [2024-01-22T13:55:48.369+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9af27cde-aa4f-4a25-a18a-53e421ca9375, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-db-migrator |
kafka | [2024-01-22 13:55:49,416] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-22T13:55:10.258413261Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.393076ms
policy-pap | [2024-01-22T13:55:48.369+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
policy-db-migrator |
kafka | [2024-01-22 13:55:49,416] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-22T13:55:10.265649401Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
policy-pap | [2024-01-22T13:55:48.369+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
kafka | [2024-01-22 13:55:49,417] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:10.265952519Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=303.148µs
policy-pap | [2024-01-22T13:55:48.372+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,423] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-22T13:55:10.268801134Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
policy-pap | [2024-01-22T13:55:48.372+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
kafka | [2024-01-22 13:55:49,423] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-22T13:55:10.278612031Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.811417ms
policy-pap | [2024-01-22T13:55:48.375+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,423] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-22T13:55:10.283967521Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
policy-pap | [2024-01-22T13:55:48.375+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
policy-db-migrator |
kafka | [2024-01-22 13:55:49,423] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-22T13:55:10.284604708Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=638.997µs
policy-pap | [2024-01-22T13:55:48.375+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
policy-db-migrator |
kafka | [2024-01-22 13:55:49,423] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
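The 0120-0140 pdpstatistics scripts above form one rekeying sequence: drop the old primary key, add an ID column, backfill it deterministically with ROW_NUMBER() ordered by timeStamp, then declare the new composite key (ID, name, version). Below is a condensed JDBC sketch of the same steps, for illustration only: the real policy-db-migrator applies the .sql files directly, and the JDBC URL, schema name, and credentials here are assumptions.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PdpStatisticsRekeySketch {
    public static void main(String[] args) throws Exception {
        // Assumed connection details; not taken from this run.
        try (Connection c = DriverManager.getConnection(
                "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_password");
             Statement s = c.createStatement()) {
            s.executeUpdate("ALTER TABLE pdpstatistics DROP PRIMARY KEY");
            s.executeUpdate("ALTER TABLE pdpstatistics ADD COLUMN ID BIGINT NOT NULL");
            // Backfill ID by timestamp order, as in 0140-pk_pdpstatistics.sql.
            s.executeUpdate("UPDATE pdpstatistics p JOIN (SELECT name, version, timeStamp, "
                    + "ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics "
                    + "GROUP BY name, version, timeStamp) t ON (p.name=t.name AND p.version=t.version "
                    + "AND p.timeStamp=t.timeStamp) SET p.id=t.row_num");
            s.executeUpdate("ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS "
                    + "PRIMARY KEY (ID, name, version)");
        }
    }
}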
grafana | logger=migrator t=2024-01-22T13:55:10.288582502Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
policy-pap | [2024-01-22T13:55:48.376+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
kafka | [2024-01-22 13:55:49,430] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-22T13:55:10.289740033Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.157601ms
policy-pap | [2024-01-22T13:55:48.380+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
grafana | logger=migrator t=2024-01-22T13:55:10.293973744Z level=info msg="Executing migration" id="create sso_setting table"
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
kafka | [2024-01-22 13:55:49,430] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-22T13:55:48.378+00:00|INFO|TimerManager|Thread-9] timer manager update started
grafana | logger=migrator t=2024-01-22T13:55:10.294885988Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=911.614µs
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,430] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
policy-pap | [2024-01-22T13:55:48.381+00:00|INFO|ServiceManager|main] Policy PAP started
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
kafka | [2024-01-22 13:55:49,431] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-22T13:55:10.299794477Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
policy-pap | [2024-01-22T13:55:48.383+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.156 seconds (process running for 11.818)
kafka | [2024-01-22 13:55:49,431] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-22T13:55:10.300511915Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=718.099µs
policy-db-migrator | --------------
policy-pap | [2024-01-22T13:55:48.896+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: YXDHh3LaSIyP8FezJr0IvQ
kafka | [2024-01-22 13:55:49,438] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-22T13:55:10.303279048Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
policy-db-migrator |
policy-pap | [2024-01-22T13:55:48.896+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: YXDHh3LaSIyP8FezJr0IvQ
kafka | [2024-01-22 13:55:49,439] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-22T13:55:10.303539515Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=260.677µs
policy-db-migrator |
policy-pap | [2024-01-22T13:55:48.902+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-01-22 13:55:49,439] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-22T13:55:10.306201165Z level=info msg="migrations completed" performed=523 skipped=0 duration=4.454310929s
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
policy-pap | [2024-01-22T13:55:48.902+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Cluster ID: YXDHh3LaSIyP8FezJr0IvQ
kafka | [2024-01-22 13:55:49,439] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=sqlstore t=2024-01-22T13:55:10.315144109Z level=info msg="Created default admin" user=admin
policy-db-migrator | --------------
policy-pap | [2024-01-22T13:55:48.967+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0
kafka | [2024-01-22 13:55:49,439] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=sqlstore t=2024-01-22T13:55:10.315417616Z level=info msg="Created default organization"
policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
policy-pap | [2024-01-22T13:55:48.968+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0
kafka | [2024-01-22 13:55:49,445] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=secrets t=2024-01-22T13:55:10.320687254Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
policy-db-migrator | JOIN pdpstatistics b
policy-pap | [2024-01-22T13:55:48.987+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-01-22 13:55:49,446] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=plugin.store t=2024-01-22T13:55:10.337409943Z level=info msg="Loading plugins..."
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
policy-pap | [2024-01-22T13:55:48.987+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: YXDHh3LaSIyP8FezJr0IvQ
kafka | [2024-01-22 13:55:49,446] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
grafana | logger=local.finder t=2024-01-22T13:55:10.373310644Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
policy-db-migrator | SET a.id = b.id
policy-pap | [2024-01-22T13:55:49.029+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-01-22 13:55:49,446] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=plugin.store t=2024-01-22T13:55:10.373364376Z level=info msg="Plugins loaded" count=55 duration=35.955923ms
policy-db-migrator | --------------
policy-pap | [2024-01-22T13:55:49.134+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-01-22 13:55:49,446] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=query_data t=2024-01-22T13:55:10.376822017Z level=info msg="Query Service initialization"
policy-db-migrator |
policy-pap | [2024-01-22T13:55:49.153+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-01-22 13:55:49,639] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=live.push_http t=2024-01-22T13:55:10.390796243Z level=info msg="Live Push Gateway initialization"
policy-db-migrator |
kafka | [2024-01-22 13:55:49,640] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-22T13:55:49.261+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=ngalert.migration t=2024-01-22T13:55:10.396152844Z level=info msg=Starting
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
kafka | [2024-01-22 13:55:49,640] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
policy-pap | [2024-01-22T13:55:49.285+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=ngalert.migration orgID=1 t=2024-01-22T13:55:10.396823761Z level=info msg="Migrating alerts for organisation"
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,640] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-22T13:55:50.482+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
grafana | logger=ngalert.migration orgID=1 t=2024-01-22T13:55:10.397111209Z level=info msg="Alerts found to migrate" alerts=0
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
kafka | [2024-01-22 13:55:49,640] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
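The UNKNOWN_TOPIC_OR_PARTITION and LEADER_NOT_AVAILABLE warnings above are the usual transient noise while the policy-pdp-pap topic is being auto-created on the single broker. A minimal sketch of pre-creating the topic with the Kafka AdminClient, which avoids that window; the broker address comes from the log, the partition and replication counts are assumptions matching a one-broker CSIT setup.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicPrecreateSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 1 partition, replication factor 1: assumptions for a single-broker test cluster.
            admin.createTopics(List.of(new NewTopic("policy-pdp-pap", 1, (short) 1))).all().get();
        }
    }
}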
policy-pap | [2024-01-22T13:55:50.488+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
grafana | logger=ngalert.migration orgID=1 t=2024-01-22T13:55:10.397408046Z level=warn msg="No available receivers"
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,649] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-22T13:55:50.504+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-01-22T13:55:10.400157809Z level=info msg="Completed legacy migration"
policy-db-migrator |
kafka | [2024-01-22 13:55:49,650] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-22T13:55:50.509+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] (Re-)joining group
grafana | logger=infra.usagestats.collector t=2024-01-22T13:55:10.495515589Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
policy-db-migrator |
kafka | [2024-01-22 13:55:49,650] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
policy-pap | [2024-01-22T13:55:50.517+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-e9c969ef-6ee6-4673-ad81-f22b62d5e7d7
grafana | logger=provisioning.datasources t=2024-01-22T13:55:10.498240801Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
kafka | [2024-01-22 13:55:49,650] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=provisioning.alerting t=2024-01-22T13:55:10.511323084Z level=info msg="starting to provision alerting"
policy-db-migrator | --------------
policy-pap | [2024-01-22T13:55:50.517+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
kafka | [2024-01-22 13:55:49,650] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
policy-pap | [2024-01-22T13:55:50.517+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
grafana | logger=provisioning.alerting t=2024-01-22T13:55:10.511355205Z level=info msg="finished to provision alerting"
kafka | [2024-01-22 13:55:49,666] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
policy-pap | [2024-01-22T13:55:50.520+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Request joining group due to: need to re-join with the given member-id: consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3-90b7f53c-d18d-432e-90f6-302f414d9a2e
grafana | logger=ngalert.state.manager t=2024-01-22T13:55:10.511643712Z level=info msg="Warming state cache for startup"
kafka | [2024-01-22 13:55:49,667] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator |
policy-pap | [2024-01-22T13:55:50.520+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
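The rebalance trace above is the standard consumer-group JoinGroup handshake: the first attempt is rejected with MemberIdRequiredException, the retry carries the broker-assigned member id, and SyncGroup then distributes the leader's assignment (Assignment(partitions=[policy-pdp-pap-0]) a few lines below). A minimal sketch of a consumer that would produce the same sequence; group.id, topic, and broker address come from the log, everything else is an assumption.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("group.id", "policy-pap");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe() starts the JoinGroup round trip logged above; the first poll()
            // drives the re-join with the assigned member id and the SyncGroup response.
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
        }
    }
}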
grafana | logger=ngalert.multiorg.alertmanager t=2024-01-22T13:55:10.511707334Z level=info msg="Starting MultiOrg Alertmanager"
kafka | [2024-01-22 13:55:49,668] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
policy-pap | [2024-01-22T13:55:50.520+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] (Re-)joining group
grafana | logger=ngalert.state.manager t=2024-01-22T13:55:10.512125375Z level=info msg="State cache has been initialized" states=0 duration=480.233µs
policy-db-migrator |
policy-pap | [2024-01-22T13:55:53.545+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-e9c969ef-6ee6-4673-ad81-f22b62d5e7d7', protocol='range'}
kafka | [2024-01-22 13:55:49,669] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=ngalert.scheduler t=2024-01-22T13:55:10.512172946Z level=info msg="Starting scheduler" tickInterval=10s
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
policy-pap | [2024-01-22T13:55:53.547+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Successfully joined group with generation Generation{generationId=1, memberId='consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3-90b7f53c-d18d-432e-90f6-302f414d9a2e', protocol='range'}
kafka | [2024-01-22 13:55:49,669] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-22T13:55:53.553+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Finished assignment for group at generation 1: {consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3-90b7f53c-d18d-432e-90f6-302f414d9a2e=Assignment(partitions=[policy-pdp-pap-0])}
grafana | logger=ticker t=2024-01-22T13:55:10.512243418Z level=info msg=starting first_tick=2024-01-22T13:55:20Z
kafka | [2024-01-22 13:55:49,681] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-22T13:55:53.554+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-e9c969ef-6ee6-4673-ad81-f22b62d5e7d7=Assignment(partitions=[policy-pdp-pap-0])}
grafana | logger=http.server t=2024-01-22T13:55:10.514105877Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
kafka | [2024-01-22 13:55:49,683] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-22T13:55:53.610+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Successfully synced group in generation Generation{generationId=1, memberId='consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3-90b7f53c-d18d-432e-90f6-302f414d9a2e', protocol='range'}
grafana | logger=grafanaStorageLogger t=2024-01-22T13:55:10.515531404Z level=info msg="Storage starting"
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,683] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
policy-pap | [2024-01-22T13:55:53.611+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
grafana | logger=grafana.update.checker t=2024-01-22T13:55:10.551189049Z level=info msg="Update check succeeded" duration=35.884781ms
policy-db-migrator |
kafka | [2024-01-22 13:55:49,683] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-22T13:55:53.612+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-e9c969ef-6ee6-4673-ad81-f22b62d5e7d7', protocol='range'}
grafana | logger=sqlstore.transactions t=2024-01-22T13:55:10.587311087Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
policy-db-migrator |
kafka | [2024-01-22 13:55:49,684] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-22T13:55:53.613+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
grafana | logger=plugins.update.checker t=2024-01-22T13:55:10.59734090Z level=info msg="Update check succeeded" duration=82.044212ms
policy-db-migrator | > upgrade 0210-sequence.sql
kafka | [2024-01-22 13:55:49,697] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-22T13:55:53.617+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
grafana | logger=sqlstore.transactions t=2024-01-22T13:55:10.606675095Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,698] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-22T13:55:53.623+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Adding newly assigned partitions: policy-pdp-pap-0
grafana | logger=sqlstore.transactions t=2024-01-22T13:55:10.617859508Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked"
policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
kafka | [2024-01-22 13:55:49,698] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
policy-pap | [2024-01-22T13:55:53.654+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Found no committed offset for partition policy-pdp-pap-0
grafana | logger=sqlstore.transactions t=2024-01-22T13:55:10.669096092Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,698] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-22T13:55:53.655+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
grafana | logger=infra.usagestats t=2024-01-22T13:56:30.526440179Z level=info msg="Usage stats are ready to report"
policy-db-migrator |
kafka | [2024-01-22 13:55:49,698] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-22T13:55:53.678+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-db-migrator |
kafka | [2024-01-22 13:55:49,711] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-22T13:55:53.678+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3, groupId=79c954dd-4645-472b-b928-ee2d4186f7c1] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-db-migrator | > upgrade 0220-sequence.sql
kafka | [2024-01-22 13:55:49,712] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-22T13:55:54.915+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,712] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
policy-pap | [2024-01-22T13:55:54.915+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet'
policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
kafka | [2024-01-22 13:55:49,712] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-22T13:55:54.918+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 3 ms
policy-db-migrator | --------------
kafka | [2024-01-22 13:55:49,712] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
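The "Found no committed offset ... Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1 ...}" pair above is the auto.offset.reset path: with no committed offset for the group, the consumer falls back to its reset policy, here evidently landing at the end of a one-record log. A sketch of the equivalent behaviour made explicit; the reset value and broker address are assumptions consistent with, but not stated by, the log.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class OffsetResetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("group.id", "policy-pap");
        props.put("auto.offset.reset", "latest"); // assumption: matches landing at the log-end offset
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("policy-pdp-pap", 0);
            consumer.assign(List.of(tp));
            consumer.seekToEnd(List.of(tp)); // explicit form of the reset seen in the log
            System.out.println("next offset: " + consumer.position(tp)); // would print 1 in this run
        }
    }
}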
(state.change.logger) policy-pap | [2024-01-22T13:56:09.928+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: policy-db-migrator | kafka | [2024-01-22 13:55:49,729] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [] policy-db-migrator | kafka | [2024-01-22 13:55:49,730] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql kafka | [2024-01-22 13:55:49,730] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) policy-pap | [2024-01-22T13:56:09.929+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,730] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c841c6d7-f089-47c8-88dd-5f2b47d779a1","timestampMs":1705931769892,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"} policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) kafka | [2024-01-22 13:55:49,730] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-01-22T13:56:09.929+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,742] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c841c6d7-f089-47c8-88dd-5f2b47d779a1","timestampMs":1705931769892,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"} policy-db-migrator | kafka | [2024-01-22 13:55:49,743] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-22T13:56:09.938+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-db-migrator | kafka | [2024-01-22 13:55:49,743] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) policy-pap | [2024-01-22T13:56:10.043+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql kafka | [2024-01-22 13:55:49,743] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-01-22T13:56:10.043+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting listener policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,743] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
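The [IN|KAFKA|policy-heartbeat] payload above is a plain-JSON PDP_STATUS heartbeat, and the dispatcher lines show it being routed by message type and request id. A minimal sketch of deserializing that payload with Gson (the log later notes "Using GSON for REST calls"); the PdpStatus class below is an illustrative subset of the fields visible in the log, not the real ONAP model class:

import com.google.gson.Gson;

public class HeartbeatParseSketch {
    // Illustrative subset of the PDP_STATUS fields seen in the log.
    static class PdpStatus {
        String pdpType;
        String state;
        String healthy;
        String messageName;
        String requestId;
        long timestampMs;
        String name;
        String pdpGroup;
    }

    public static void main(String[] args) {
        String json = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                + "\"messageName\":\"PDP_STATUS\",\"requestId\":\"c841c6d7-f089-47c8-88dd-5f2b47d779a1\","
                + "\"timestampMs\":1705931769892,\"name\":\"apex-e44b8da2-bb64-414a-9066-32eb9577eb32\","
                + "\"pdpGroup\":\"defaultGroup\"}";
        PdpStatus status = new Gson().fromJson(json, PdpStatus.class);
        // Dispatch on the message type, as the MessageTypeDispatcher lines suggest.
        if ("PDP_STATUS".equals(status.messageName)) {
            System.out.printf("heartbeat from %s: state=%s healthy=%s%n",
                    status.name, status.state, status.healthy);
        }
    }
}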
(state.change.logger) policy-pap | [2024-01-22T13:56:10.043+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting timer policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) kafka | [2024-01-22 13:55:49,759] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-01-22T13:56:10.043+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=f41ee548-273a-4dd1-a197-a877ac7fd0e5, expireMs=1705931800043] policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,760] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-22T13:56:10.045+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=f41ee548-273a-4dd1-a197-a877ac7fd0e5, expireMs=1705931800043] policy-db-migrator | kafka | [2024-01-22 13:55:49,760] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) policy-pap | [2024-01-22T13:56:10.045+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting enqueue policy-db-migrator | kafka | [2024-01-22 13:55:49,760] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-01-22T13:56:10.045+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate started policy-db-migrator | > upgrade 0120-toscatrigger.sql kafka | [2024-01-22 13:55:49,760] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-01-22T13:56:10.046+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,774] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f41ee548-273a-4dd1-a197-a877ac7fd0e5","timestampMs":1705931770028,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | DROP TABLE IF EXISTS toscatrigger kafka | [2024-01-22 13:55:49,774] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-22T13:56:10.083+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,774] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f41ee548-273a-4dd1-a197-a877ac7fd0e5","timestampMs":1705931770028,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | policy-pap | [2024-01-22T13:56:10.083+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE kafka | [2024-01-22 13:55:49,774] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.084+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-01-22 13:55:49,775] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
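The [OUT|KAFKA|policy-pdp-pap] line shows PAP publishing the PDP_UPDATE as JSON on the shared policy-pdp-pap topic; because PAP also consumes that topic, the same message re-appears on the [IN|KAFKA|...] side and is logged as "discarding event of type PDP_UPDATE". A minimal publishing sketch with the plain Java Kafka producer (broker and topic are from the log; the payload below is only a subset of the logged fields):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpUpdatePublishSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // broker from the log
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Subset of the PDP_UPDATE fields from the [OUT|KAFKA|policy-pdp-pap] line above.
        String pdpUpdate = "{\"source\":\"pap-62c94b3b-66f9-4964-a14e-729bc5920807\","
                + "\"messageName\":\"PDP_UPDATE\","
                + "\"requestId\":\"f41ee548-273a-4dd1-a197-a877ac7fd0e5\"}";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // PAP and the PDPs share this topic, which is why policy-pap later sees its
            // own message come back and logs "discarding event of type PDP_UPDATE".
            producer.send(new ProducerRecord<>("policy-pdp-pap", pdpUpdate));
            producer.flush();
        }
    }
}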
(state.change.logger) policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f41ee548-273a-4dd1-a197-a877ac7fd0e5","timestampMs":1705931770028,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-01-22 13:55:49,787] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.084+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE kafka | [2024-01-22 13:55:49,789] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB policy-pap | [2024-01-22T13:56:10.105+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-01-22 13:55:49,789] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5cc52511-b48b-4fa0-a6e6-c267e358d5d1","timestampMs":1705931770093,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"} kafka | [2024-01-22 13:55:49,789] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.108+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-01-22 13:55:49,789] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5cc52511-b48b-4fa0-a6e6-c267e358d5d1","timestampMs":1705931770093,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup"} kafka | [2024-01-22 13:55:49,842] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-pap | [2024-01-22T13:56:10.109+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus kafka | [2024-01-22 13:55:49,843] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.116+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-01-22 13:55:49,844] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) policy-db-migrator | DROP TABLE IF EXISTS toscaparameter policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f41ee548-273a-4dd1-a197-a877ac7fd0e5","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"29114705-8887-4977-854d-e6da3f475b73","timestampMs":1705931770095,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-01-22 13:55:49,844] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.134+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping kafka | [2024-01-22 13:55:49,844] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping enqueue kafka | [2024-01-22 13:55:49,855] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping timer kafka | [2024-01-22 13:55:49,856] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-pap | [2024-01-22T13:56:10.135+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=f41ee548-273a-4dd1-a197-a877ac7fd0e5, expireMs=1705931800043] kafka | [2024-01-22 13:55:49,856] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping listener kafka | [2024-01-22 13:55:49,857] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints policy-pap | [2024-01-22T13:56:10.135+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopped kafka | [2024-01-22 13:55:49,857] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.138+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-01-22 13:55:49,865] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-db-migrator | -------------- policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f41ee548-273a-4dd1-a197-a877ac7fd0e5","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"29114705-8887-4977-854d-e6da3f475b73","timestampMs":1705931770095,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-01-22 13:55:49,867] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata policy-pap | [2024-01-22T13:56:10.138+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f41ee548-273a-4dd1-a197-a877ac7fd0e5 kafka | [2024-01-22 13:55:49,867] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.148+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate successful kafka | [2024-01-22 13:55:49,867] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.148+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 start publishing next request kafka | [2024-01-22 13:55:49,868] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.148+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange starting kafka | [2024-01-22 13:55:49,876] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | DROP TABLE IF EXISTS toscaproperty policy-pap | [2024-01-22T13:56:10.148+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange starting listener kafka | [2024-01-22 13:55:49,877] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.149+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange starting timer kafka | [2024-01-22 13:55:49,877] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.149+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=1e11feec-3c6a-4861-a178-a1d471866c80, expireMs=1705931800149] kafka | [2024-01-22 13:55:49,877] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.149+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange starting enqueue kafka | [2024-01-22 13:55:49,877] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql kafka | [2024-01-22 13:55:49,888] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-01-22T13:56:10.149+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange started policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,889] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-22T13:56:10.149+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=1e11feec-3c6a-4861-a178-a1d471866c80, expireMs=1705931800149] policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY kafka | [2024-01-22 13:55:49,889] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) policy-pap | [2024-01-22T13:56:10.149+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,889] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1e11feec-3c6a-4861-a178-a1d471866c80","timestampMs":1705931770028,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-01-22 13:55:49,890] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
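Each outbound request is guarded by a timer: "state-change timer registered Timer [name=1e11feec-..., expireMs=...]" followed by "state-change timer waiting 30000ms", and the timer is cancelled once the matching response arrives (or, as near the end of this log, discarded as expired after the 30 s window). A compact sketch of that register/cancel pattern, assuming a ScheduledExecutorService rather than ONAP's TimerManager:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class RequestTimersSketch {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Map<String, ScheduledFuture<?>> timers = new ConcurrentHashMap<>();

    // "state-change timer registered Timer [name=<requestId>, expireMs=...]"
    void register(String requestId, long waitMs) {
        timers.put(requestId, scheduler.schedule(
                () -> System.out.println("request " + requestId + " expired"),
                waitMs, TimeUnit.MILLISECONDS));
    }

    // "state-change timer cancelled Timer [name=<requestId>, ...]" when the PDP answers in time
    void cancel(String requestId) {
        ScheduledFuture<?> timer = timers.remove(requestId);
        if (timer != null) {
            timer.cancel(false);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        RequestTimersSketch timers = new RequestTimersSketch();
        timers.register("1e11feec-3c6a-4861-a178-a1d471866c80", 30_000); // 30 s, as in the log
        timers.cancel("1e11feec-3c6a-4861-a178-a1d471866c80");           // response arrived first
        timers.scheduler.shutdown();
        timers.scheduler.awaitTermination(1, TimeUnit.SECONDS);
    }
}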
(state.change.logger) policy-pap | [2024-01-22T13:56:10.160+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | kafka | [2024-01-22 13:55:49,896] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1e11feec-3c6a-4861-a178-a1d471866c80","timestampMs":1705931770028,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- kafka | [2024-01-22 13:55:49,896] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-22T13:56:10.160+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) kafka | [2024-01-22 13:55:49,896] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) policy-pap | [2024-01-22T13:56:10.174+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | -------------- policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"1e11feec-3c6a-4861-a178-a1d471866c80","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"029c38c3-9b61-40d5-84a1-9b0554ef65e1","timestampMs":1705931770162,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-01-22 13:55:49,897] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.175+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 1e11feec-3c6a-4861-a178-a1d471866c80 kafka | [2024-01-22 13:55:49,897] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.199+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-01-22 13:55:49,909] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1e11feec-3c6a-4861-a178-a1d471866c80","timestampMs":1705931770028,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-01-22 13:55:49,910] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.199+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE kafka | [2024-01-22 13:55:49,910] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-pap | [2024-01-22T13:56:10.204+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-01-22 13:55:49,911] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"1e11feec-3c6a-4861-a178-a1d471866c80","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"029c38c3-9b61-40d5-84a1-9b0554ef65e1","timestampMs":1705931770162,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-01-22 13:55:49,911] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
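The PDP_STATUS replies above carry a response.responseTo field equal to the requestId of the PDP_STATE_CHANGE they answer; that is how the dispatcher decides whether anything is waiting for a reply ("no listener for request id ..." otherwise). A small correlation sketch with Gson, using illustrative classes rather than the real ONAP ones:

import com.google.gson.Gson;

public class ResponseCorrelationSketch {
    // Illustrative subset of the PDP_STATUS response fields from the log.
    static class Response { String responseTo; String responseStatus; }
    static class PdpStatus { String messageName; Response response; }

    public static void main(String[] args) {
        String outstandingRequestId = "1e11feec-3c6a-4861-a178-a1d471866c80"; // from PDP_STATE_CHANGE
        String reply = "{\"messageName\":\"PDP_STATUS\",\"response\":{"
                + "\"responseTo\":\"1e11feec-3c6a-4861-a178-a1d471866c80\","
                + "\"responseStatus\":\"SUCCESS\"}}";
        PdpStatus status = new Gson().fromJson(reply, PdpStatus.class);
        // RequestIdDispatcher-style matching: a reply only completes the request it answers.
        if (status.response != null && outstandingRequestId.equals(status.response.responseTo)) {
            System.out.println("PdpStateChange successful: " + status.response.responseStatus);
        }
    }
}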
(state.change.logger) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange stopping kafka | [2024-01-22 13:55:50,163] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange stopping enqueue kafka | [2024-01-22 13:55:50,164] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange stopping timer kafka | [2024-01-22 13:55:50,164] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=1e11feec-3c6a-4861-a178-a1d471866c80, expireMs=1705931800149] kafka | [2024-01-22 13:55:50,165] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange stopping listener kafka | [2024-01-22 13:55:50,165] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange stopped kafka | [2024-01-22 13:55:50,175] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpStateChange successful kafka | [2024-01-22 13:55:50,176] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 start publishing next request kafka | [2024-01-22 13:55:50,176] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting kafka | [2024-01-22 13:55:50,176] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting listener kafka | [2024-01-22 13:55:50,176] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting timer kafka | [2024-01-22 13:55:50,187] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=8cc3dde7-8a50-459b-a008-976e7631331f, expireMs=1705931800205] kafka | [2024-01-22 13:55:50,189] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0100-upgrade.sql policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate starting enqueue kafka | [2024-01-22 13:55:50,189] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate started kafka | [2024-01-22 13:55:50,190] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | select 'upgrade to 1100 completed' as msg policy-pap | [2024-01-22T13:56:10.205+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-01-22 13:55:50,190] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8cc3dde7-8a50-459b-a008-976e7631331f","timestampMs":1705931770192,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-01-22 13:55:50,205] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.215+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-01-22 13:55:50,206] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | msg policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8cc3dde7-8a50-459b-a008-976e7631331f","timestampMs":1705931770192,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-01-22 13:55:50,206] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) policy-db-migrator | upgrade to 1100 completed policy-pap | [2024-01-22T13:56:10.215+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE kafka | [2024-01-22 13:55:50,206] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.216+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-01-22 13:55:50,206] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-pap | {"source":"pap-62c94b3b-66f9-4964-a14e-729bc5920807","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"8cc3dde7-8a50-459b-a008-976e7631331f","timestampMs":1705931770192,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-01-22 13:55:50,218] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.216+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE kafka | [2024-01-22 13:55:50,219] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME policy-pap | [2024-01-22T13:56:10.226+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-01-22 13:55:50,219] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8cc3dde7-8a50-459b-a008-976e7631331f","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"49cfa103-b65d-4118-9a66-07d688cd7200","timestampMs":1705931770218,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-01-22 13:55:50,219] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-01-22 13:55:50,219] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8cc3dde7-8a50-459b-a008-976e7631331f","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"49cfa103-b65d-4118-9a66-07d688cd7200","timestampMs":1705931770218,"name":"apex-e44b8da2-bb64-414a-9066-32eb9577eb32","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | > upgrade 0110-idx_tsidx1.sql kafka | [2024-01-22 13:55:50,226] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- kafka | [2024-01-22 13:55:50,227] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics kafka | [2024-01-22 13:55:50,227] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping enqueue kafka | [2024-01-22 13:55:50,227] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping timer kafka | [2024-01-22 13:55:50,227] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=8cc3dde7-8a50-459b-a008-976e7631331f, expireMs=1705931800205] policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) kafka | [2024-01-22 13:55:50,236] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopping listener kafka | [2024-01-22 13:55:50,236] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate stopped kafka | [2024-01-22 13:55:50,236] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-22T13:56:10.227+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8cc3dde7-8a50-459b-a008-976e7631331f kafka | [2024-01-22 13:55:50,236] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-pap | [2024-01-22T13:56:10.231+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 PdpUpdate successful kafka | [2024-01-22 13:55:50,236] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:10.231+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-e44b8da2-bb64-414a-9066-32eb9577eb32 has no more requests kafka | [2024-01-22 13:55:50,244] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-pap | [2024-01-22T13:56:15.766+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls kafka | [2024-01-22 13:55:50,244] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:15.775+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls kafka | [2024-01-22 13:55:50,245] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-22T13:56:16.175+00:00|INFO|SessionData|http-nio-6969-exec-8] unknown group testGroup kafka | [2024-01-22 13:55:50,245] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:16.735+00:00|INFO|SessionData|http-nio-6969-exec-8] create cached group testGroup kafka | [2024-01-22 13:55:50,245] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) policy-pap | [2024-01-22T13:56:16.736+00:00|INFO|SessionData|http-nio-6969-exec-8] creating DB group testGroup kafka | [2024-01-22 13:55:50,254] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:17.261+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup kafka | [2024-01-22 13:55:50,255] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | policy-pap | [2024-01-22T13:56:17.574+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 kafka | [2024-01-22 13:55:50,255] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-22T13:56:17.674+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 kafka | [2024-01-22 13:55:50,255] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-pap | [2024-01-22T13:56:17.674+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup kafka | [2024-01-22 13:55:50,255] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
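The 0120-audit_sequence.sql step above creates a JPA-style sequence table and seeds SEQ_GEN with the largest existing jpapolicyaudit id, so that generated ids keep increasing after the upgrade. A sketch of the same two statements run over JDBC, assuming a MariaDB driver on the classpath and hypothetical connection details (the CSIT compose files wire up the real ones):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SeedAuditSequenceSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL and credentials, for illustration only.
        String url = "jdbc:mariadb://mariadb:3306/policyadmin";
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_password");
             Statement stmt = conn.createStatement()) {
            // Statements as emitted by policy-db-migrator for 0120-audit_sequence.sql:
            stmt.executeUpdate("CREATE TABLE IF NOT EXISTS audit_sequence "
                    + "(SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, "
                    + "PRIMARY KEY PK_SEQUENCE (SEQ_NAME))");
            // Seed the sequence with the highest existing audit id so new ids keep increasing.
            stmt.executeUpdate("INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) "
                    + "VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))");
        }
    }
}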
(state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:17.675+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup kafka | [2024-01-22 13:55:50,267] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-pap | [2024-01-22T13:56:17.688+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-01-22T13:56:17Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-01-22T13:56:17Z, user=policyadmin)] kafka | [2024-01-22 13:55:50,268] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:18.378+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup kafka | [2024-01-22 13:55:50,268] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-22T13:56:18.379+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 kafka | [2024-01-22 13:55:50,268] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:18.379+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 kafka | [2024-01-22 13:55:50,268] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-pap | [2024-01-22T13:56:18.379+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup kafka | [2024-01-22 13:55:50,275] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:18.379+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup kafka | [2024-01-22 13:55:50,276] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | policy-pap | [2024-01-22T13:56:18.391+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-22T13:56:18Z, user=policyadmin)] kafka | [2024-01-22 13:55:50,276] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-22T13:56:18.771+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group defaultGroup kafka | [2024-01-22 13:55:50,276] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | TRUNCATE TABLE sequence policy-pap | [2024-01-22T13:56:18.771+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group testGroup kafka | [2024-01-22 13:55:50,276] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger)
policy-pap | [2024-01-22T13:56:18.771+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
policy-pap | [2024-01-22T13:56:18.771+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2024-01-22T13:56:18.771+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group testGroup
policy-pap | [2024-01-22T13:56:18.771+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group testGroup
policy-pap | [2024-01-22T13:56:18.783+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-22T13:56:18Z, user=policyadmin)]
policy-pap | [2024-01-22T13:56:39.341+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
policy-pap | [2024-01-22T13:56:39.343+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
policy-pap | [2024-01-22T13:56:40.043+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=f41ee548-273a-4dd1-a197-a877ac7fd0e5, expireMs=1705931800043]
policy-pap | [2024-01-22T13:56:40.149+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=1e11feec-3c6a-4861-a178-a1d471866c80, expireMs=1705931800149]
kafka | [2024-01-22 13:55:50,282] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-22 13:55:50,282] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-22 13:55:50,283] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,283] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,283] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-22 13:55:50,290] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-22 13:55:50,292] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-22 13:55:50,292] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,292] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,293] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-22 13:55:50,314] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-22 13:55:50,315] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-22 13:55:50,315] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,315] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,315] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-22 13:55:50,381] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-22 13:55:50,381] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-22 13:55:50,381] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,381] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,382] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-22 13:55:50,410] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-22 13:55:50,410] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-22 13:55:50,410] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,410] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,411] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE pdpstatistics
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE statistics_sequence
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | policyadmin: OK: upgrade (1300)
policy-db-migrator | name version
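The migrator output above follows a simple pattern: each "> upgrade NNNN-name.sql" banner is one script run in numeric order, with the statement echoed between dashed rules and the outcome recorded in a history table that is dumped next. A minimal sketch of that scheme, assuming hypothetical names (MIGRATION_DIR, schema_history) and using sqlite3 to stay self-contained, where the real migrator targets MariaDB:

    # Sketch of a numbered-script migrator in the style of policy-db-migrator.
    # MIGRATION_DIR and schema_history are illustrative names, not the real ones.
    import pathlib
    import sqlite3

    MIGRATION_DIR = pathlib.Path("sql/1300")  # e.g. 0100-pdpstatistics.sql, ...

    def migrate(conn: sqlite3.Connection) -> None:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS schema_history ("
            "  id INTEGER, script TEXT, success INTEGER, at_time TEXT)"
        )
        applied = {row[0] for row in conn.execute("SELECT script FROM schema_history")}
        # Scripts sort lexically by their numeric prefix, which gives the run order.
        for i, script in enumerate(sorted(MIGRATION_DIR.glob("*.sql")), start=1):
            if script.name in applied:
                continue
            print(f"> upgrade {script.name}")
            ok = 1
            try:
                conn.executescript(script.read_text())
            except sqlite3.Error:
                ok = 0
            conn.execute(
                "INSERT INTO schema_history VALUES (?, ?, ?, datetime('now'))",
                (i, script.name, ok),
            )
            conn.commit()
            if not ok:
                break  # stop on first failure, as a real migrator would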
policy-db-migrator | policyadmin 1300
policy-db-migrator | ID script operation from_version to_version tag success atTime
policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:15
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:16
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:17
policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:18
policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:19
policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:20
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:21
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2201241355150800u 1 2024-01-22 13:55:21
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2201241355150900u 1 2024-01-22 13:55:21
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:21
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:21
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:21
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:21
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:21
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:21
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:22
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:22
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2201241355151000u 1 2024-01-22 13:55:22
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2201241355151100u 1 2024-01-22 13:55:22
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2201241355151200u 1 2024-01-22 13:55:22
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2201241355151200u 1 2024-01-22 13:55:22
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2201241355151200u 1 2024-01-22 13:55:22
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2201241355151200u 1 2024-01-22 13:55:22
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2201241355151300u 1 2024-01-22 13:55:22
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2201241355151300u 1 2024-01-22 13:55:22
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2201241355151300u 1 2024-01-22 13:55:22
policy-db-migrator | policyadmin: OK @ 1300
kafka | [2024-01-22 13:55:50,416] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-22 13:55:50,416] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-22 13:55:50,416] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,417] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,417] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-22 13:55:50,423] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-22 13:55:50,424] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-22 13:55:50,424] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,424] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,424] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-22 13:55:50,428] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-22 13:55:50,429] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-22 13:55:50,429] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,429] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-22 13:55:50,429] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(Zh415qvLQvmHe6oa34REOg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2024-01-22 13:55:50,432] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2024-01-22 13:55:50,433] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2024-01-22 13:55:50,434] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-01-22 13:55:50,435] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-01-22 13:55:50,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,449] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,449] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,449] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,449] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,449] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,449] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,449] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,450] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,450] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,451] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,451] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,452] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,452] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,452] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,452] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,452] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,452] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
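The table above is the migrator's history: 126 scripts, each with from/to schema versions, a batch tag, a success flag (1 throughout), and a timestamp, summarised by "policyadmin: OK @ 1300". A small sketch of the kind of sanity check a CSIT job could run against such a table; the table and column names here mirror the log's header (script, to_version, success) and are assumptions, not the migrator's actual schema:

    # Sketch: verify every recorded migration succeeded and report the
    # final schema version. Assumes a schema_history table shaped like
    # the header printed in the log above.
    import sqlite3

    def check_history(conn: sqlite3.Connection) -> None:
        failed = conn.execute(
            "SELECT id, script FROM schema_history WHERE success != 1"
        ).fetchall()
        if failed:
            raise RuntimeError(f"failed migrations: {failed}")
        total, latest = conn.execute(
            "SELECT COUNT(*), MAX(to_version) FROM schema_history"
        ).fetchone()
        # The run above applied 126 scripts and finished at version 1300.
        print(f"policyadmin: OK @ {latest} ({total} scripts)")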
kafka | [2024-01-22 13:55:50,452] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,452] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,453] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,453] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 15 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,455] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,455] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,455] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,455] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,455] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,455] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 6 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,456] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,456] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,456] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,457] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,457] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,457] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,457] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,457] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,457] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,457] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,458] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,458] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,458] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,459] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,459] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,459] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,460] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,460] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
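Each "Finished loading offsets and group metadata ..." line above reports how long one __consumer_offsets partition took to load and how much of that was scheduler wait (partition 3 took 15 ms, 12 of them in the scheduler; later partitions load in 0-2 ms). A small sketch that parses those lines to surface the slowest partitions:

    # Sketch: extract per-partition load times from GroupMetadataManager
    # log lines like the ones above and print the slowest partitions.
    import re
    import sys

    PATTERN = re.compile(
        r"Finished loading offsets and group metadata from __consumer_offsets-(\d+) "
        r"in (\d+) milliseconds for epoch \d+, of which (\d+) milliseconds "
        r"was spent in the scheduler"
    )

    def load_times(lines):
        for line in lines:
            m = PATTERN.search(line)
            if m:
                partition, total_ms, sched_ms = map(int, m.groups())
                yield partition, total_ms, sched_ms

    if __name__ == "__main__":
        # e.g. pipe the build log into this script on stdin
        slowest = sorted(load_times(sys.stdin), key=lambda t: -t[1])[:5]
        for partition, total_ms, sched_ms in slowest:
            print(f"__consumer_offsets-{partition}: {total_ms} ms ({sched_ms} ms scheduler)")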
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,460] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,460] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,460] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,461] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,461] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,461] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,461] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,462] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,462] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,463] INFO [Broker id=1] Finished LeaderAndIsr request in 1165ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
kafka | [2024-01-22 13:55:50,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,463] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-22 13:55:50,465] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=Zh415qvLQvmHe6oa34REOg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-01-22 13:55:50,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,468] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,469] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,470] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,470] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,470] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
kafka | [2024-01-22 13:55:50,473] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-01-22 13:55:50,509] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-e9c969ef-6ee6-4673-ad81-f22b62d5e7d7 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,519] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 79c954dd-4645-472b-b928-ee2d4186f7c1 in Empty state. Created a new member id consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3-90b7f53c-d18d-432e-90f6-302f414d9a2e and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,529] INFO [GroupCoordinator 1]: Preparing to rebalance group 79c954dd-4645-472b-b928-ee2d4186f7c1 in state PreparingRebalance with old generation 0 (__consumer_offsets-26) (reason: Adding new member consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3-90b7f53c-d18d-432e-90f6-302f414d9a2e with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,530] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-e9c969ef-6ee6-4673-ad81-f22b62d5e7d7 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,581] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group e65163a7-0954-4bf8-9924-8c41fa40f9af in Empty state. Created a new member id consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2-bd794398-55d4-4516-bbe7-133fbc5867a3 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:50,585] INFO [GroupCoordinator 1]: Preparing to rebalance group e65163a7-0954-4bf8-9924-8c41fa40f9af in state PreparingRebalance with old generation 0 (__consumer_offsets-41) (reason: Adding new member consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2-bd794398-55d4-4516-bbe7-133fbc5867a3 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:53,541] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:53,546] INFO [GroupCoordinator 1]: Stabilized group 79c954dd-4645-472b-b928-ee2d4186f7c1 generation 1 (__consumer_offsets-26) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:53,570] INFO [GroupCoordinator 1]: Assignment received from leader consumer-79c954dd-4645-472b-b928-ee2d4186f7c1-3-90b7f53c-d18d-432e-90f6-302f414d9a2e for group 79c954dd-4645-472b-b928-ee2d4186f7c1 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:53,570] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-e9c969ef-6ee6-4673-ad81-f22b62d5e7d7 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:53,592] INFO [GroupCoordinator 1]: Stabilized group e65163a7-0954-4bf8-9924-8c41fa40f9af generation 1 (__consumer_offsets-41) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-22 13:55:53,608] INFO [GroupCoordinator 1]: Assignment received from leader consumer-e65163a7-0954-4bf8-9924-8c41fa40f9af-2-bd794398-55d4-4516-bbe7-133fbc5867a3 for group e65163a7-0954-4bf8-9924-8c41fa40f9af for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
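The group lifecycle shown above (Empty, then PreparingRebalance, then Stabilized, then assignment for generation 1) can be checked from the broker side with the consumer-groups tool that ships with Kafka. A minimal sketch, assuming the broker address kafka:9092 seen in the log and that the tool is on the PATH inside the kafka container (Confluent images typically install it as kafka-consumer-groups, without the .sh suffix):

# List every consumer group the broker knows about
kafka-consumer-groups.sh --bootstrap-server kafka:9092 --list
# Describe the policy-pap group from the log: members, assigned partitions, lag
kafka-consumer-groups.sh --bootstrap-server kafka:9092 --describe --group policy-pap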
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping policy-api ...
Stopping kafka ...
Stopping grafana ...
Stopping mariadb ...
Stopping prometheus ...
Stopping simulator ...
Stopping compose_zookeeper_1 ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping simulator ... done
Stopping policy-pap ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping compose_zookeeper_1 ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing policy-api ...
Removing kafka ...
Removing policy-db-migrator ...
Removing grafana ...
Removing mariadb ...
Removing prometheus ...
Removing simulator ...
Removing compose_zookeeper_1 ...
Removing policy-apex-pdp ... done
Removing policy-pap ... done
Removing grafana ... done
Removing kafka ... done
Removing simulator ... done
Removing policy-api ... done
Removing mariadb ... done
Removing prometheus ... done
Removing policy-db-migrator ... done
Removing compose_zookeeper_1 ... done
Removing network compose_default
++ cd /w/workspace/policy-pap-master-project-csit-verify-pap
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ [[ -n /tmp/tmp.vXEmLvt3D4 ]]
+ rsync -av /tmp/tmp.vXEmLvt3D4/ /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 910,840 bytes received 95 bytes 1,821,870.00 bytes/sec
total size is 910,293 speedup is 1.00
+ rm -rf /w/workspace/policy-pap-master-project-csit-verify-pap/models
+ exit 0
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2142 killed;
[ssh-agent] Stopped.
Robot results publisher started...
-Parsing output xml:
Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir:
Done!
-Assigning results to build:
Done!
-Checking thresholds:
Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
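For local debugging, the teardown and archiving sequence in the step above reduces to two commands. A minimal sketch, assuming it runs from the directory holding the compose file and that RESULTS_DIR is a hypothetical stand-in for the temporary Robot output directory (the job used /tmp/tmp.vXEmLvt3D4, with WORKSPACE being the usual Jenkins workspace variable):

# Stop and remove the CSIT containers, their named volumes, and any orphans
docker-compose down -v --remove-orphans
# Archive the Robot Framework results into the workspace, as the job does
rsync -av "${RESULTS_DIR}/" "${WORKSPACE}/csit/archives/pap"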
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins12624779721734675757.sh
---> sysstat.sh
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins11906087679230808903.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-verify-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-verify-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-verify-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-verify-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-verify-pap/archives/
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins12204705917561419458.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-l0SW from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-l0SW/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins7571175974915932427.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-verify-pap@tmp/config1742631325063875622tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins5929045176702373030.sh
---> create-netrc.sh
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins11283686590870921537.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-l0SW from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-l0SW/bin to PATH
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins4327540658111996079.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins16741567697499941198.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-l0SW from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible.
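The pip conflict above is tolerated by the job, but if a clean resolution were wanted, one option would be to pin openstacksdk to the range lftools 0.37.8 declares; a sketch, not taken from the job's scripts:

# Keep openstacksdk inside the bound declared by lftools 0.37.8
python -m pip install 'openstacksdk<1.5.0'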
lf-activate-venv(): INFO: Adding /tmp/venv-l0SW/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-verify-pap] $ /bin/bash -l /tmp/jenkins478837406985277266.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-l0SW from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
python-openstackclient 6.4.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-l0SW/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-verify-pap/504
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-14213 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
---> lscpu:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 8
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC-Rome Processor
Stepping: 0
CPU MHz: 2800.000
BogoMIPS: 5600.00
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
---> nproc:
8
---> df -h:
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 708K 3.2G 1% /run
/dev/vda1 155G 14G 142G 9% /
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/vda15 105M 4.4M 100M 5% /boot/efi
tmpfs 3.2G 0 3.2G 0% /run/user/1001
---> free -m:
      total used free shared buff/cache available
Mem:  32167 864 24831 0 6471 30846
Swap: 1023 0 1023
---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:82:5e:43 brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.203/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85856sec preferred_lft 85856sec
    inet6 fe80::f816:3eff:fe82:5e43/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:8e:3d:20:ac brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
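The host snapshot above (kernel, CPU, storage, memory, interfaces) comes from stock utilities and can be regenerated on any comparable node; a minimal sketch:

uname -a   # kernel release and architecture
lscpu      # CPU topology, model, and flags
nproc      # usable CPU count
df -h      # filesystem usage
free -m    # memory and swap, in MiB
ip addr    # interface addresses and state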
---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14213) 01/22/24 _x86_64_ (8 CPU)

13:50:08 LINUX RESTART (8 CPU)

13:51:01 tps rtps wtps bread/s bwrtn/s
13:52:01 84.00 17.58 66.42 1020.10 24283.95
13:53:01 119.25 13.81 105.43 1118.08 32574.70
13:54:01 136.59 9.30 127.30 1653.86 66773.80
13:55:01 142.26 0.08 142.18 5.07 98718.35
13:56:01 312.11 14.20 297.92 765.54 36203.25
13:57:01 20.40 0.00 20.40 0.00 20855.96
13:58:01 28.21 0.05 28.16 10.66 21839.96
13:59:01 74.58 1.93 72.64 111.96 10231.37
Average: 114.67 7.12 107.55 585.65 38934.57

13:51:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
13:52:01 30147712 31712548 2791508 8.47 66568 1810176 1403648 4.13 856312 1645796 149616
13:53:01 29809112 31705804 3130108 9.50 85428 2106372 1395464 4.11 871720 1932292 150628
13:54:01 27194084 31650836 5745136 17.44 127800 4516200 1423944 4.19 1022104 4253468 1064928
13:55:01 25506560 31650528 7432660 22.56 140824 6122580 1511368 4.45 1037636 5858688 130060
13:56:01 23276472 29605044 9662748 29.34 156272 6271740 8859488 26.07 3257624 5789316 1352
13:57:01 23318964 29648248 9620256 29.21 156476 6272016 8793908 25.87 3216756 5786772 208
13:58:01 23552240 29907080 9386980 28.50 156880 6300196 7238800 21.30 2980640 5800952 200
13:59:01 25402484 31561184 7536736 22.88 160996 6115276 1565520 4.61 1337196 5647320 32196
Average: 26025954 30930159 6913266 20.99 131406 4939320 4024018 11.84 1822498 4589326 191148

13:51:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
13:52:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:52:01 lo 1.13 1.13 0.12 0.12 0.00 0.00 0.00 0.00
13:52:01 ens3 51.46 36.99 791.65 6.38 0.00 0.00 0.00 0.00
13:53:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:53:01 lo 1.40 1.40 0.14 0.14 0.00 0.00 0.00 0.00
13:53:01 ens3 61.36 46.64 871.59 8.46 0.00 0.00 0.00 0.00
13:54:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:54:01 lo 8.33 8.33 0.81 0.81 0.00 0.00 0.00 0.00
13:54:01 ens3 680.72 379.49 17418.18 28.40 0.00 0.00 0.00 0.00
13:54:01 br-30abeaa9709f 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:55:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:55:01 lo 4.13 4.13 0.38 0.38 0.00 0.00 0.00 0.00
13:55:01 ens3 500.45 267.32 16093.65 19.42 0.00 0.00 0.00 0.00
13:55:01 br-30abeaa9709f 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:56:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:56:01 veth61bc196 5.05 6.50 0.81 0.92 0.00 0.00 0.00 0.00
13:56:01 veth64d3651 0.55 0.93 0.06 0.32 0.00 0.00 0.00 0.00
13:56:01 veth66d2419 0.33 0.73 0.03 0.65 0.00 0.00 0.00 0.00
13:57:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:57:01 veth61bc196 0.17 0.35 0.01 0.02 0.00 0.00 0.00 0.00
13:57:01 veth64d3651 0.27 0.22 0.02 0.01 0.00 0.00 0.00 0.00
13:57:01 veth66d2419 0.53 0.53 0.05 1.51 0.00 0.00 0.00 0.00
13:58:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:58:01 veth61bc196 0.17 0.48 0.01 0.03 0.00 0.00 0.00 0.00
13:58:01 vetha86a375 53.97 48.13 21.03 40.49 0.00 0.00 0.00 0.00
13:58:01 veth571a967 0.00 0.58 0.00 0.03 0.00 0.00 0.00 0.00
13:59:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:59:01 lo 35.72 35.72 6.25 6.25 0.00 0.00 0.00 0.00
13:59:01 ens3 1723.59 1032.07 36087.21 158.65 0.00 0.00 0.00 0.00
Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: lo 3.97 3.97 0.74 0.74 0.00 0.00 0.00 0.00
Average: ens3 171.88 99.74 4412.81 12.66 0.00 0.00 0.00 0.00
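The disk, memory, and network reports above are sysstat's sar output. A minimal sketch of collecting the same counters live, assuming sysstat is installed and using a one-minute interval to mirror the sampling cadence seen in the tables:

sar -b 60 8      # I/O and transfer rates (tps, bread/s, bwrtn/s)
sar -r 60 8      # memory utilization (kbmemfree, %memused, ...)
sar -n DEV 60 8  # per-interface network throughput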
---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14213) 01/22/24 _x86_64_ (8 CPU)

13:50:08 LINUX RESTART (8 CPU)

13:51:01 CPU %user %nice %system %iowait %steal %idle
13:52:01 all 7.44 0.00 0.57 6.14 0.04 85.81
13:52:01 0 18.13 0.00 0.90 13.77 0.05 67.15
13:52:01 1 29.05 0.00 1.82 2.56 0.13 66.43
13:52:01 2 2.49 0.00 0.30 0.30 0.05 96.86
13:52:01 3 4.99 0.00 0.33 0.37 0.02 94.29
13:52:01 4 0.80 0.00 0.35 0.57 0.00 98.28
13:52:01 5 0.52 0.00 0.28 2.55 0.00 96.65
13:52:01 6 0.62 0.00 0.32 28.70 0.03 70.33
13:52:01 7 3.00 0.00 0.30 0.33 0.02 96.35
13:53:01 all 9.75 0.00 0.71 4.15 0.03 85.35
13:53:01 0 0.30 0.00 0.33 22.41 0.02 76.94
13:53:01 1 5.62 0.00 0.58 1.67 0.02 92.12
13:53:01 2 31.39 0.00 1.87 2.83 0.07 63.85
13:53:01 3 14.58 0.00 1.02 0.27 0.03 84.10
13:53:01 4 2.75 0.00 0.42 1.64 0.05 95.14
13:53:01 5 12.41 0.00 0.42 1.02 0.03 86.12
13:53:01 6 4.13 0.00 0.57 2.78 0.02 92.50
13:53:01 7 6.91 0.00 0.43 0.62 0.02 92.02
13:54:01 all 9.09 0.00 3.60 10.41 0.07 76.83
13:54:01 0 8.60 0.00 3.77 11.03 0.08 76.51
13:54:01 1 7.22 0.00 4.19 18.84 0.05 69.69
13:54:01 2 8.93 0.00 2.95 15.82 0.05 72.25
13:54:01 3 8.54 0.00 3.71 27.39 0.05 60.31
13:54:01 4 10.72 0.00 3.91 1.93 0.08 83.36
13:54:01 5 9.13 0.00 4.40 0.70 0.07 85.71
13:54:01 6 8.38 0.00 2.53 6.95 0.07 82.07
13:54:01 7 11.17 0.00 3.38 0.61 0.08 84.76
13:55:01 all 5.81 0.00 2.47 11.66 0.04 80.02
13:55:01 0 5.70 0.00 2.93 2.30 0.05 89.02
13:55:01 1 5.99 0.00 2.52 42.35 0.05 49.09
13:55:01 2 3.25 0.00 2.51 10.22 0.03 83.99
13:55:01 3 6.32 0.00 2.75 7.31 0.05 83.58
13:55:01 4 6.78 0.00 2.60 6.78 0.03 83.81
13:55:01 5 9.26 0.00 1.48 0.08 0.03 89.14
13:55:01 6 4.25 0.00 1.85 24.10 0.05 69.75
13:55:01 7 4.91 0.00 3.09 0.35 0.03 91.61
13:56:01 all 28.36 0.00 3.93 4.13 0.11 63.47
13:56:01 0 35.02 0.00 4.84 0.24 0.10 59.81
13:56:01 1 33.77 0.00 4.37 5.45 0.12 56.29
13:56:01 2 22.89 0.00 3.92 1.48 0.12 71.59
13:56:01 3 26.79 0.00 4.05 10.40 0.14 58.62
13:56:01 4 23.71 0.00 3.29 12.01 0.12 60.88
13:56:01 5 31.64 0.00 4.11 1.38 0.12 62.75
13:56:01 6 23.43 0.00 3.50 0.71 0.12 72.24
13:56:01 7 29.63 0.00 3.40 1.37 0.10 65.50
13:57:01 all 4.68 0.00 0.51 1.60 0.05 93.16
13:57:01 0 3.44 0.00 0.68 0.02 0.08 95.78
13:57:01 1 4.71 0.00 0.37 0.02 0.05 94.86
13:57:01 2 5.88 0.00 0.64 0.12 0.03 93.33
13:57:01 3 4.98 0.00 0.69 0.10 0.05 94.18
13:57:01 4 6.37 0.00 0.54 12.46 0.05 80.59
13:57:01 5 4.66 0.00 0.47 0.00 0.05 94.82
13:57:01 6 3.17 0.00 0.33 0.10 0.03 96.36
13:57:01 7 4.22 0.00 0.33 0.00 0.03 95.41
13:58:01 all 1.57 0.00 0.37 1.67 0.04 96.34
13:58:01 0 1.69 0.00 0.45 0.00 0.07 97.79
13:58:01 1 3.04 0.00 0.35 0.02 0.03 96.57
13:58:01 2 1.39 0.00 0.38 0.10 0.05 98.08
13:58:01 3 2.31 0.00 0.47 0.10 0.03 97.09
13:58:01 4 0.75 0.00 0.38 13.03 0.03 85.80
13:58:01 5 1.13 0.00 0.40 0.02 0.03 98.42
13:58:01 6 0.85 0.00 0.32 0.00 0.03 98.80
13:58:01 7 1.42 0.00 0.23 0.15 0.03 98.16
13:59:01 all 8.24 0.00 0.70 1.07 0.03 89.94
13:59:01 0 5.04 0.00 0.75 0.05 0.03 94.13
13:59:01 1 12.78 0.00 0.72 0.59 0.03 85.89
13:59:01 2 3.22 0.00 0.52 0.25 0.02 96.00
13:59:01 3 0.73 0.00 0.43 0.13 0.02 98.68
13:59:01 4 4.69 0.00 0.73 6.09 0.03 88.45
13:59:01 5 24.92 0.00 1.30 0.43 0.07 73.28
13:59:01 6 0.99 0.00 0.53 0.87 0.05 97.56
13:59:01 7 13.59 0.00 0.65 0.20 0.03 85.53
Average: all 9.35 0.00 1.60 5.09 0.05 83.90
Average: 0 9.72 0.00 1.83 6.23 0.06 82.17
Average: 1 12.74 0.00 1.86 8.87 0.06 76.48
Average: 2 9.92 0.00 1.63 3.87 0.05 84.53
Average: 3 8.63 0.00 1.67 5.72 0.05 83.92
Average: 4 7.05 0.00 1.52 6.81 0.05 84.57
Average: 5 11.70 0.00 1.60 0.77 0.05 85.88
Average: 6 5.71 0.00 1.24 8.01 0.05 84.99
Average: 7 9.33 0.00 1.47 0.45 0.04 88.70
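The per-core table above is the -P variant of the same tool; a short sketch, under the same sysstat assumption (8 one-minute samples, as in the report):

# Per-CPU utilization for every core plus the 'all' aggregate
sar -P ALL 60 8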