Started by upstream project "policy-docker-master-merge-java" build number 333 originally caused by: Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137060 Running as SYSTEM [EnvInject] - Loading node environment variables. Building remotely on prd-ubuntu1804-docker-8c-8g-14552 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap [ssh-agent] Looking for ssh-agent implementation... [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) $ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-2G3xVJK9wIwP/agent.2121 SSH_AGENT_PID=2123 [ssh-agent] Started. Running ssh-add (command line suppressed) Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_2662552322566714215.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_2662552322566714215.key) [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) The recommended git tool is: NONE using credential onap-jenkins-ssh Wiping out workspace first. Cloning the remote Git repository Cloning repository git://cloud.onap.org/mirror/policy/docker.git > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git > git --version # timeout=10 > git --version # 'git version 2.17.1' using GIT_SSH to set credentials Gerrit user Verifying host key using manually-configured host key entries > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 Avoid second fetch > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 Checking out Revision 31c61d495474985b8cc3460464f888651d0919ed (refs/remotes/origin/master) > git config core.sparsecheckout # timeout=10 > git checkout -f 31c61d495474985b8cc3460464f888651d0919ed # timeout=30 Commit message: "Add kafka support in K8s CSIT" > git rev-list --no-walk caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785 # timeout=10 provisioning config files... 
copy managed file [npmrc] to file:/home/jenkins/.npmrc copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17407350503572915132.sh ---> python-tools-install.sh Setup pyenv: * system (set by /opt/pyenv/version) * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.6 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-dyeB lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-dyeB/bin to PATH Generating Requirements File Python 3.10.6 pip 23.3.2 from /tmp/venv-dyeB/lib/python3.10/site-packages/pip (python 3.10) appdirs==1.4.4 argcomplete==3.2.1 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.34.25 botocore==1.34.25 bs4==0.0.2 cachetools==5.3.2 certifi==2023.11.17 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 cliff==4.5.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==2.5.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.5.0 docker==4.2.2 dogpile.cache==1.3.0 email-validator==2.1.0.post1 filelock==3.13.1 future==0.18.3 gitdb==4.0.11 GitPython==3.1.41 google-auth==2.26.2 httplib2==0.22.0 identify==2.5.33 idna==3.6 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.21.1 jsonschema-specifications==2023.12.1 keystoneauth1==5.5.0 kubernetes==29.0.0 lftools==0.37.8 lxml==5.1.0 MarkupSafe==2.1.4 msgpack==1.0.7 multi_key_dict==2.0.3 munch==4.0.0 netaddr==0.10.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==0.62.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==3.0.0 oslo.config==9.3.0 oslo.context==5.3.0 oslo.i18n==6.2.0 oslo.log==5.4.0 oslo.serialization==5.3.0 oslo.utils==7.0.0 packaging==23.2 pbr==6.0.0 platformdirs==4.1.0 prettytable==3.9.0 pyasn1==0.5.1 pyasn1-modules==0.3.0 pycparser==2.21 pygerrit2==2.0.15 PyGithub==2.1.1 pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2 pyrsistent==0.20.0 python-cinderclient==9.4.0 python-dateutil==2.8.2 python-heatclient==3.4.0 python-jenkins==1.8.2 python-keystoneclient==5.3.0 python-magnumclient==4.3.0 python-novaclient==18.4.0 python-openstackclient==6.0.0 python-swiftclient==4.4.0 pytz==2023.3.post1 PyYAML==6.0.1 referencing==0.32.1 requests==2.31.0 requests-oauthlib==1.3.1 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.17.1 rsa==4.9 ruamel.yaml==0.18.5 ruamel.yaml.clib==0.2.8 s3transfer==0.10.0 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5 stevedore==5.1.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.3 tqdm==4.66.1 typing_extensions==4.9.0 tzdata==2023.4 urllib3==1.26.18 virtualenv==20.25.0 wcwidth==0.2.13 websocket-client==1.7.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0 yq==3.2.3 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SET_JDK_VERSION=openjdk17 GIT_URL="git://cloud.onap.org/mirror" [EnvInject] - Variables injected successfully. 
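The lf-activate-venv() messages above boil down to a create/install/prepend sequence. A minimal sketch of that sequence, assuming the helper's internals (the actual global-jjb implementation may differ):

  # Create a throwaway python3 venv, install lftools into it, and put
  # its bin directory first on PATH -- the same effect the
  # lf-activate-venv() INFO lines above report.
  venv_dir="$(mktemp -d /tmp/venv-XXXX)"      # log shows /tmp/venv-dyeB
  python3 -m venv "$venv_dir"
  "$venv_dir/bin/python3" -m pip install --quiet --upgrade pip lftools
  export PATH="$venv_dir/bin:$PATH"
  pip freeze                                  # "Generating Requirements File"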
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins15193114834158797893.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "17.0.4" 2022-07-19 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins11297563122817450193.sh + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap + set +u + save_set + RUN_CSIT_SAVE_SET=ehxB + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace + '[' 1 -eq 0 ']' + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + export ROBOT_VARIABLES= + ROBOT_VARIABLES= + export PROJECT=pap + PROJECT=pap + cd /w/workspace/policy-pap-master-project-csit-pap + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ mktemp -d ++ ROBOT_VENV=/tmp/tmp.yejbioFAjC ++ echo ROBOT_VENV=/tmp/tmp.yejbioFAjC +++ python3 --version ++ echo 'Python version is: Python 3.6.9' Python version is: Python 3.6.9 ++ python3 -m venv --clear /tmp/tmp.yejbioFAjC ++ source /tmp/tmp.yejbioFAjC/bin/activate +++ deactivate nondestructive +++ '[' -n '' ']' +++ '[' -n '' ']' +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r +++ '[' -n '' ']' +++ unset VIRTUAL_ENV +++ '[' '!' 
nondestructive = nondestructive ']' +++ VIRTUAL_ENV=/tmp/tmp.yejbioFAjC +++ export VIRTUAL_ENV +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ PATH=/tmp/tmp.yejbioFAjC/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ export PATH +++ '[' -n '' ']' +++ '[' -z '' ']' +++ _OLD_VIRTUAL_PS1= +++ '[' 'x(tmp.yejbioFAjC) ' '!=' x ']' +++ PS1='(tmp.yejbioFAjC) ' +++ export PS1 +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r ++ set -exu ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' ++ echo 'Installing Python Requirements' Installing Python Requirements ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2023.11.17 cffi==1.15.1 charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 lxml==5.1.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 WebTest==3.0.0 zipp==3.6.0 ++ mkdir -p /tmp/tmp.yejbioFAjC/src/onap ++ rm -rf /tmp/tmp.yejbioFAjC/src/onap/testsuite ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre ++ echo 'Installing python confluent-kafka library' Installing python confluent-kafka library ++ python3 -m pip install -qq confluent-kafka ++ echo 'Uninstall docker-py and reinstall docker.' Uninstall docker-py and reinstall docker. 
++ python3 -m pip uninstall -y -qq docker ++ python3 -m pip install -U -qq docker ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2023.11.17 cffi==1.15.1 charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1 deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 future==0.18.3 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.1.0 MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0 ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 PyYAML==6.0.1 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0 ++ uname ++ grep -q Linux ++ sudo apt-get -y -qq install libxml2-utils + load_set + _setopts=ehuxB ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o nounset + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo ehuxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +e + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +u + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + source_safely /tmp/tmp.yejbioFAjC/bin/activate + '[' -z /tmp/tmp.yejbioFAjC/bin/activate ']' + relax_set + set +e + set +o pipefail + . /tmp/tmp.yejbioFAjC/bin/activate ++ deactivate nondestructive ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ unset _OLD_VIRTUAL_PATH ++ '[' -n '' ']' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r ++ '[' -n '' ']' ++ unset VIRTUAL_ENV ++ '[' '!' 
nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/tmp/tmp.yejbioFAjC ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ PATH=/tmp/tmp.yejbioFAjC/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ '[' -n '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1='(tmp.yejbioFAjC) ' ++ '[' 'x(tmp.yejbioFAjC) ' '!=' x ']' ++ PS1='(tmp.yejbioFAjC) (tmp.yejbioFAjC) ' ++ export PS1 ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + export TEST_OPTIONS= + TEST_OPTIONS= ++ mktemp -d + WORKDIR=/tmp/tmp.T4ASB2z6Jw + cd /tmp/tmp.T4ASB2z6Jw + docker login -u docker -p docker nexus3.onap.org:10001 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview +++ GERRIT_BRANCH=master +++ echo GERRIT_BRANCH=master GERRIT_BRANCH=master +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 
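The save_set / relax_set / load_set calls traced above implement a guard around sourced scripts: remember the caller's shell options, drop errexit and pipefail while the child script runs, then restore. A minimal sketch of that pattern, reconstructed from the trace rather than copied from run-project-csit.sh:

  # Save the current short options (e.g. "ehxB"), relax them around a
  # sourced script, then restore errexit/xtrace if they were saved.
  save_set()  { SAVED_OPTS="$-"; }
  relax_set() { set +e; set +o pipefail; }
  load_set()  {
      case "$SAVED_OPTS" in *e*) set -e ;; esac
      case "$SAVED_OPTS" in *x*) set -x ;; esac
  }
  source_safely() { save_set; relax_set; . "$1"; load_set; }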
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose +++ grafana=false +++ gui=false +++ [[ 2 -gt 0 ]] +++ key=apex-pdp +++ case $key in +++ echo apex-pdp apex-pdp +++ component=apex-pdp +++ shift +++ [[ 1 -gt 0 ]] +++ key=--grafana +++ case $key in +++ grafana=true +++ shift +++ [[ 0 -gt 0 ]] +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose +++ echo 'Configuring docker compose...' Configuring docker compose... +++ source export-ports.sh +++ source get-versions.sh +++ '[' -z pap ']' +++ '[' -n apex-pdp ']' +++ '[' apex-pdp == logs ']' +++ '[' true = true ']' +++ echo 'Starting apex-pdp application with Grafana' Starting apex-pdp application with Grafana +++ docker-compose up -d apex-pdp grafana Creating network "compose_default" with the default driver Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... latest: Pulling from prom/prometheus Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... latest: Pulling from grafana/grafana Digest: sha256:6b5b37eb35bbf30e7f64bd7f0fd41c0a5b7637f65d3bf93223b04a192b8bf3e2 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 10.10.2: Pulling from mariadb Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT)... 3.1.1-SNAPSHOT: Pulling from onap/policy-models-simulator Digest: sha256:09b9abb94ede918d748d5f6ffece2e7592c9941527c37f3d00df286ee158ae05 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT Pulling zookeeper (confluentinc/cp-zookeeper:latest)... latest: Pulling from confluentinc/cp-zookeeper Digest: sha256:000f1d11090f49fa8f67567e633bab4fea5dbd7d9119e7ee2ef259c509063593 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest Pulling kafka (confluentinc/cp-kafka:latest)... latest: Pulling from confluentinc/cp-kafka Digest: sha256:51145a40d23336a11085ca695d02bdeee66fe01b582837c6d223384952226be9 Status: Downloaded newer image for confluentinc/cp-kafka:latest Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)... 
3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator Digest: sha256:611206351f1d7f71f498112d482be2423c80b29c75cff0383910ee3a4330e7b5 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT)... 3.1.1-SNAPSHOT: Pulling from onap/policy-api Digest: sha256:bbf3044dd101de99d940093be953f041397d02b2f17a70f8da7719c160735c2e Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT)... 3.1.1-SNAPSHOT: Pulling from onap/policy-pap Digest: sha256:8a0432281bb5edb6d25e3d0e62d78b6aebc2875f52ecd11259251b497208c04e Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT)... 3.1.1-SNAPSHOT: Pulling from onap/policy-apex-pdp Digest: sha256:0fdae8f3a73915cdeb896f38ac7d5b74e658832fd10929dcf3fe68219098b89b Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT Creating simulator ... Creating compose_zookeeper_1 ... Creating mariadb ... Creating prometheus ... Creating mariadb ... done Creating policy-db-migrator ... Creating policy-db-migrator ... done Creating policy-api ... Creating prometheus ... done Creating grafana ... Creating policy-api ... done Creating simulator ... done Creating compose_zookeeper_1 ... done Creating kafka ... Creating kafka ... done Creating policy-pap ... Creating grafana ... done Creating policy-pap ... done Creating policy-apex-pdp ... Creating policy-apex-pdp ... done +++ echo 'Prometheus server: http://localhost:30259' Prometheus server: http://localhost:30259 +++ echo 'Grafana server: http://localhost:30269' Grafana server: http://localhost:30269 +++ cd /w/workspace/policy-pap-master-project-csit-pap ++ sleep 10 ++ unset http_proxy https_proxy ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 Waiting for REST to come up on localhost port 30003... 
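wait_for_rest.sh localhost 30003 polls until PAP's REST port answers; the container status tables that follow are printed while it waits. A minimal sketch of such a wait loop (only the script name and its localhost/30003 arguments come from the log; the loop body is an assumption):

  # Poll the given host:port until an HTTP request succeeds, dumping
  # container status between attempts as the output below does.
  host="$1" port="$2"
  echo "Waiting for REST to come up on ${host} port ${port}..."
  until curl -sf "http://${host}:${port}/" >/dev/null 2>&1; do
      docker ps --format 'table {{ .Names }}\t{{ .Status }}'
      sleep 5
  done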
NAMES STATUS policy-apex-pdp Up 10 seconds policy-pap Up 11 seconds kafka Up 13 seconds grafana Up 12 seconds policy-api Up 15 seconds prometheus Up 16 seconds compose_zookeeper_1 Up 14 seconds mariadb Up 17 seconds simulator Up 15 seconds NAMES STATUS policy-apex-pdp Up 15 seconds policy-pap Up 16 seconds kafka Up 18 seconds grafana Up 17 seconds policy-api Up 20 seconds prometheus Up 21 seconds compose_zookeeper_1 Up 19 seconds mariadb Up 22 seconds simulator Up 20 seconds NAMES STATUS policy-apex-pdp Up 20 seconds policy-pap Up 21 seconds kafka Up 23 seconds grafana Up 22 seconds policy-api Up 25 seconds prometheus Up 26 seconds compose_zookeeper_1 Up 24 seconds mariadb Up 27 seconds simulator Up 25 seconds NAMES STATUS policy-apex-pdp Up 25 seconds policy-pap Up 26 seconds kafka Up 28 seconds grafana Up 27 seconds policy-api Up 30 seconds prometheus Up 31 seconds compose_zookeeper_1 Up 29 seconds mariadb Up 32 seconds simulator Up 30 seconds NAMES STATUS policy-apex-pdp Up 30 seconds policy-pap Up 31 seconds kafka Up 33 seconds grafana Up 32 seconds policy-api Up 35 seconds prometheus Up 36 seconds compose_zookeeper_1 Up 34 seconds mariadb Up 37 seconds simulator Up 35 seconds NAMES STATUS policy-apex-pdp Up 35 seconds policy-pap Up 36 seconds kafka Up 38 seconds grafana Up 37 seconds policy-api Up 40 seconds prometheus Up 41 seconds compose_zookeeper_1 Up 39 seconds mariadb Up 42 seconds simulator Up 40 seconds ++ export 'SUITES=pap-test.robot pap-slas.robot' ++ SUITES='pap-test.robot pap-slas.robot' ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + docker_stats + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt ++ uname -s + '[' Linux == Darwin ']' + sh -c 'top -bn1 | head -3' top - 12:00:17 up 5 min, 0 users, load average: 3.17, 1.48, 0.61 Tasks: 209 total, 1 running, 131 sleeping, 0 stopped, 0 zombie %Cpu(s): 12.1 us, 2.5 sy, 0.0 ni, 79.2 id, 6.0 wa, 0.0 hi, 0.1 si, 0.1 st + echo + sh -c 'free -h' total used free shared buff/cache available Mem: 31G 2.7G 22G 1.3M 6.7G 28G Swap: 1.0G 0B 1.0G + echo + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-apex-pdp Up 35 seconds policy-pap Up 36 seconds kafka Up 38 seconds grafana Up 37 seconds policy-api Up 41 seconds prometheus Up 41 seconds compose_zookeeper_1 Up 40 seconds mariadb Up 43 seconds simulator Up 40 seconds + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 45362a277fa0 policy-apex-pdp 1.93% 173.4MiB / 31.41GiB 0.54% 9.4kB / 8.72kB 0B / 0B 48 4d217cf2a7c4 policy-pap 13.16% 496.9MiB / 31.41GiB 1.54% 29.9kB / 32.5kB 0B / 180MB 62 451f6c43c6af kafka 9.94% 377.5MiB / 31.41GiB 1.17% 74.4kB / 77.1kB 0B / 508kB 
83 7493f10c2d01 grafana 0.03% 54.32MiB / 31.41GiB 0.17% 19.1kB / 3.36kB 0B / 23.9MB 16 aff50ba937b3 policy-api 0.13% 515.2MiB / 31.41GiB 1.60% 1e+03kB / 710kB 0B / 0B 53 88819eafa12d prometheus 0.00% 17.96MiB / 31.41GiB 0.06% 1.55kB / 316B 0B / 0B 11 c66458784174 compose_zookeeper_1 0.12% 98.55MiB / 31.41GiB 0.31% 56.8kB / 50.5kB 127kB / 406kB 60 84ec1f810d50 mariadb 0.02% 101.8MiB / 31.41GiB 0.32% 996kB / 1.19MB 11MB / 68.2MB 37 20d1574c8bca simulator 0.08% 123.9MiB / 31.41GiB 0.39% 1.23kB / 0B 0B / 0B 76 + echo + cd /tmp/tmp.T4ASB2z6Jw + echo 'Reading the testplan:' Reading the testplan: + echo 'pap-test.robot pap-slas.robot' + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' + cat testplan.txt /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ++ xargs + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ... + relax_set + set +e + set +o pipefail + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ============================================================================== pap ============================================================================== pap.Pap-Test ============================================================================== LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | ------------------------------------------------------------------------------ LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | ------------------------------------------------------------------------------ LoadNodeTemplates :: Create node templates in database using speci... 
| PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after deploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
------------------------------------------------------------------------------
pap.Pap-Test | PASS |
22 tests, 22 passed, 0 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche...
| PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
------------------------------------------------------------------------------
pap.Pap-Slas | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap | PASS |
30 tests, 30 passed, 0 failed
==============================================================================
Output: /tmp/tmp.T4ASB2z6Jw/output.xml
Log: /tmp/tmp.T4ASB2z6Jw/log.html
Report: /tmp/tmp.T4ASB2z6Jw/report.html
+ RESULT=0
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 0'
RESULT: 0
+ exit 0
+ on_exit
+ rc=0
+ [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
kafka                 Up 2 minutes
grafana               Up 2 minutes
policy-api            Up 2 minutes
prometheus            Up 2 minutes
compose_zookeeper_1   Up 2 minutes
mariadb               Up 2 minutes
simulator             Up 2 minutes
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 12:02:07 up 7 min, 0 users, load average: 0.68, 1.14, 0.59
Tasks: 198 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
%Cpu(s): 10.1 us, 2.0 sy, 0.0 ni, 83.2 id, 4.7 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.7G         22G        1.3M        6.7G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
kafka                 Up 2 minutes
grafana               Up 2 minutes
policy-api            Up 2 minutes
prometheus            Up 2 minutes
compose_zookeeper_1   Up 2 minutes
mariadb               Up 2 minutes
simulator             Up 2 minutes
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O    PIDS
45362a277fa0   policy-apex-pdp   0.42%   185.1MiB / 31.41GiB   0.58%   57.2kB / 91.8kB   0B / 0B      50
4d217cf2a7c4   policy-pap        0.56%   501.8MiB / 31.41GiB   1.56%   2.33MB / 811kB    0B / 180MB   65
451f6c43c6af   kafka             1.27%   384.4MiB / 31.41GiB   1.19%   243kB / 218kB     0B / 606kB   83
7493f10c2d01   grafana           0.03%   56.5MiB / 31.41GiB    0.18%   20.1kB / 4.49kB   0B / 23.9MB  16
aff50ba937b3   policy-api        0.10%   516.8MiB / 31.41GiB   1.61%   2.49MB / 1.26MB   0B / 0B      54
88819eafa12d prometheus 0.00% 24.14MiB / 31.41GiB 0.08% 184kB / 10.9kB 0B / 0B 11 c66458784174 compose_zookeeper_1 0.19% 99MiB / 31.41GiB 0.31% 59.8kB / 52.1kB 127kB / 406kB 60 84ec1f810d50 mariadb 0.01% 103.1MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11MB / 68.5MB 28 20d1574c8bca simulator 0.06% 123.8MiB / 31.41GiB 0.38% 1.5kB / 0B 0B / 0B 76 + echo + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ++ echo 'Shut down started!' Shut down started! ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose ++ source export-ports.sh ++ source get-versions.sh ++ echo 'Collecting logs from docker compose containers...' Collecting logs from docker compose containers... ++ docker-compose logs ++ cat docker_compose.log Attaching to policy-apex-pdp, policy-pap, kafka, grafana, policy-api, policy-db-migrator, prometheus, compose_zookeeper_1, mariadb, simulator mariadb | 2024-01-23 11:59:34+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-01-23 11:59:34+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-01-23 11:59:34+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-01-23 11:59:34+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-01-23 11:59:34 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-01-23 11:59:34 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-01-23 11:59:34 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-01-23 11:59:36+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-01-23 11:59:36+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-01-23 11:59:36+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-01-23 11:59:36 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 93 ... 
mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-01-23 11:59:36 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-01-23 11:59:36 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-01-23 11:59:36 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: Completed initialization of buffer pool policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.3:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.6:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-01-23 11:59:36 0 [Note] InnoDB: log sequence number 46590; transaction id 14 mariadb | 2024-01-23 11:59:36 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-01-23 11:59:36 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-01-23 11:59:36 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-01-23 11:59:36 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-01-23 11:59:36 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-01-23 11:59:37+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2024-01-23 11:59:38+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2024-01-23 11:59:38+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | 2024-01-23 11:59:39+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | mariadb | 2024-01-23 11:59:39+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. mariadb | # mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); mariadb | # you may not use this file except in compliance with the License. 
mariadb | # You may obtain a copy of the License at mariadb | # mariadb | # http://www.apache.org/licenses/LICENSE-2.0 mariadb | # mariadb | # Unless required by applicable law or agreed to in writing, software mariadb | # distributed under the License is distributed on an "AS IS" BASIS, mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. mariadb | # See the License for the specific language governing permissions and mariadb | # limitations under the License. mariadb | mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | do mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" mariadb | done mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' zookeeper_1 | ===> User zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper_1 | ===> Configuring ... zookeeper_1 | ===> Running preflight checks ... zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper_1 | ===> Launching ... zookeeper_1 | ===> Launching zookeeper ... zookeeper_1 | [2024-01-23 11:59:40,867] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-23 11:59:40,880] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-23 11:59:40,880] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-23 11:59:40,880] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-23 11:59:40,880] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-23 11:59:40,882] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-01-23 11:59:40,882] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-01-23 11:59:40,882] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-01-23 11:59:40,882] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper_1 | [2024-01-23 11:59:40,884] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) zookeeper_1 | [2024-01-23 11:59:40,884] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-23 11:59:40,884] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-23 11:59:40,884] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-23 11:59:40,884] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-23 11:59:40,884] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-23 11:59:40,885] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper_1 | [2024-01-23 11:59:40,896] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@5fa07e12 (org.apache.zookeeper.server.ServerMetrics) zookeeper_1 | [2024-01-23 11:59:40,898] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-01-23 11:59:40,899] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-01-23 11:59:40,901] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-01-23 11:59:40,911] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-23 11:59:40,911] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-23 11:59:40,911] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-23 11:59:40,911] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-23 11:59:40,911] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-23 11:59:40,911] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-23 11:59:40,911] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO 
'\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' policy-apex-pdp | Waiting for mariadb port 3306... policy-apex-pdp | mariadb (172.17.0.3:3306) open policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | Waiting for pap port 6969... policy-apex-pdp | kafka (172.17.0.9:9092) open policy-apex-pdp | pap (172.17.0.10:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2024-01-23T12:00:14.252+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2024-01-23T12:00:14.427+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-5e219e28-7118-417e-b91d-edf2321c7473-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 5e219e28-7118-417e-b91d-edf2321c7473 policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class 
org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null grafana | logger=settings t=2024-01-23T11:59:39.331008745Z level=info msg="Starting Grafana" version=10.2.3 commit=1e84fede543acc892d2a2515187e545eb047f237 branch=HEAD compiled=2023-12-18T15:46:07Z grafana | logger=settings t=2024-01-23T11:59:39.331371303Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2024-01-23T11:59:39.331429446Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2024-01-23T11:59:39.331454648Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2024-01-23T11:59:39.33150014Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2024-01-23T11:59:39.331536452Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2024-01-23T11:59:39.331598725Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2024-01-23T11:59:39.331624416Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2024-01-23T11:59:39.331667438Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2024-01-23T11:59:39.331723001Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-01-23T11:59:39.331749783Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-01-23T11:59:39.331836027Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2024-01-23T11:59:39.331861668Z level=info msg=Target target=[all] grafana | logger=settings t=2024-01-23T11:59:39.331953403Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-01-23T11:59:39.331989325Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-01-23T11:59:39.332062728Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-01-23T11:59:39.33208993Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-01-23T11:59:39.332161604Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-01-23T11:59:39.332236827Z level=info msg="App mode 
production" grafana | logger=sqlstore t=2024-01-23T11:59:39.332656809Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-01-23T11:59:39.332723912Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-01-23T11:59:39.333613857Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-01-23T11:59:39.334824869Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-01-23T11:59:39.335677232Z level=info msg="Migration successfully executed" id="create migration_log table" duration=851.883µs grafana | logger=migrator t=2024-01-23T11:59:39.393940118Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2024-01-23T11:59:39.395270846Z level=info msg="Migration successfully executed" id="create user table" duration=1.330058ms grafana | logger=migrator t=2024-01-23T11:59:39.400374736Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-01-23T11:59:39.401219659Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=868.704µs mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp mariadb | mariadb | 2024-01-23 11:59:39+00:00 [Note] [Entrypoint]: Stopping temporary server mariadb | 2024-01-23 11:59:39 0 [Note] mariadbd (initiated by: unknown): Normal shutdown mariadb | 2024-01-23 11:59:39 0 [Note] InnoDB: FTS optimize thread exiting. mariadb | 2024-01-23 11:59:39 0 [Note] InnoDB: Starting shutdown... mariadb | 2024-01-23 11:59:39 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool mariadb | 2024-01-23 11:59:39 0 [Note] InnoDB: Buffer pool(s) dump completed at 240123 11:59:39 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Shutdown completed; log sequence number 330365; transaction id 298 mariadb | 2024-01-23 11:59:40 0 [Note] mariadbd: Shutdown complete mariadb | mariadb | 2024-01-23 11:59:40+00:00 [Note] [Entrypoint]: Temporary server stopped mariadb | mariadb | 2024-01-23 11:59:40+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. mariadb | mariadb | 2024-01-23 11:59:40 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-01-23 11:59:40 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-01-23 11:59:40 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-01-23 11:59:40 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: 128 rollback segments are active. 
mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: log sequence number 330365; transaction id 299 mariadb | 2024-01-23 11:59:40 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool mariadb | 2024-01-23 11:59:40 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-01-23 11:59:40 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. mariadb | 2024-01-23 11:59:40 0 [Note] Server socket created on IP: '0.0.0.0'. mariadb | 2024-01-23 11:59:40 0 [Note] Server socket created on IP: '::'. mariadb | 2024-01-23 11:59:40 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution mariadb | 2024-01-23 11:59:40 0 [Note] InnoDB: Buffer pool(s) load completed at 240123 11:59:40 mariadb | 2024-01-23 11:59:40 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) mariadb | 2024-01-23 11:59:40 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) mariadb | 2024-01-23 11:59:41 25 [Warning] Aborted connection 25 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) mariadb | 2024-01-23 11:59:41 32 [Warning] Aborted connection 32 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | 
ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-01-23T12:00:14.573+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-apex-pdp | [2024-01-23T12:00:14.573+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-apex-pdp | [2024-01-23T12:00:14.573+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011214572 policy-apex-pdp | [2024-01-23T12:00:14.576+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-1, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-01-23T12:00:14.589+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-01-23T12:00:14.589+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2024-01-23T12:00:14.595+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5e219e28-7118-417e-b91d-edf2321c7473, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2024-01-23T12:00:14.624+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-5e219e28-7118-417e-b91d-edf2321c7473-2 policy-apex-pdp | client.rack = kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... 
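The ConsumerConfig dumps above correspond to a plain kafka-clients consumer subscribed to policy-pdp-pap. A minimal sketch, assuming kafka-clients 3.6.0 (the version the log reports) on the classpath: the bootstrap servers, String deserializers, group id and auto.offset.reset mirror the dumped values, while the class name and the 15-second poll are illustrative.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values mirrored from the ConsumerConfig dump above.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG,
                    "5e219e28-7118-417e-b91d-edf2321c7473");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("policy-pdp-pap"));
                // One poll, roughly matching the source's fetchTimeout=15000.
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.println(r.value()); // e.g. PDP_STATUS heartbeats
                }
            }
        }
    }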
kafka | [2024-01-23 11:59:42,081] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,081] INFO Client environment:host.name=451f6c43c6af (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,081] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,081] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,081] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,081] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-metadata-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka_2.13-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.5.3-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.5.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.5.3.jar:/usr/share/java/cp-base-new/kafka-storage-7.5.3-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.5.3-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.5.3.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 
11:59:42,081] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,082] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,082] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,082] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,082] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,082] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,082] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,082] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,082] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,082] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,082] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,082] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,085] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@62bd765 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,089] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-01-23 11:59:42,093] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-01-23 11:59:42,100] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) zookeeper_1 | [2024-01-23 11:59:40,911] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-23 11:59:40,911] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-23 11:59:40,911] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-23 11:59:40,912] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-23 11:59:40,912] INFO Server environment:host.name=c66458784174 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-23 11:59:40,912] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-23 11:59:40,912] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-23 11:59:40,912] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) policy-api | :: Spring Boot :: (v3.1.4) policy-api | policy-api | [2024-01-23T11:59:50.405+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.9 with PID 23 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-01-23T11:59:50.406+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-01-23T11:59:52.196+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-01-23T11:59:52.286+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 80 ms. Found 6 JPA repository interfaces. policy-api | [2024-01-23T11:59:52.694+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-01-23T11:59:52.695+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-01-23T11:59:53.355+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-01-23T11:59:53.364+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-01-23T11:59:53.366+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-01-23T11:59:53.366+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16] policy-api | [2024-01-23T11:59:53.452+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-01-23T11:59:53.452+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2978 ms policy-api | [2024-01-23T11:59:53.902+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-01-23T11:59:53.984+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 policy-api | [2024-01-23T11:59:53.987+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer policy-api | [2024-01-23T11:59:54.035+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-01-23T11:59:54.404+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-01-23T11:59:54.432+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2024-01-23T11:59:54.548+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@3d37203b policy-api | [2024-01-23T11:59:54.550+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
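The "HikariPool-1 - Start completed" line above amounts to a pooled MariaDB DataSource inside policy-api. A minimal sketch, assuming HikariCP and the MariaDB driver on the classpath; the JDBC URL and target database are assumptions (the log prints only org.mariadb.jdbc.Connection, not the URL), and the policy_user/policy_user credentials are taken from the entrypoint trace earlier.

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.Connection;

    public class ApiPoolSketch {
        public static void main(String[] args) throws Exception {
            HikariConfig cfg = new HikariConfig();
            cfg.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin"); // assumed URL
            cfg.setUsername("policy_user");  // matches the GRANTs above
            cfg.setPassword("policy_user");  // as used by the entrypoint scripts
            cfg.setMaximumPoolSize(10);      // illustrative; not from the log
            try (HikariDataSource ds = new HikariDataSource(cfg);
                 Connection conn = ds.getConnection()) {
                // Verify the pool hands out live MariaDB connections.
                System.out.println(conn.getMetaData().getDatabaseProductName());
            }
        }
    }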
policy-api | [2024-01-23T11:59:54.577+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) policy-api | [2024-01-23T11:59:54.578+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead policy-api | [2024-01-23T11:59:56.385+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2024-01-23T11:59:56.389+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2024-01-23T11:59:57.679+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2024-01-23T11:59:58.448+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2024-01-23T11:59:59.624+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2024-01-23T11:59:59.839+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@3005133e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@19a7e618, org.springframework.security.web.context.SecurityContextHolderFilter@2542d320, org.springframework.security.web.header.HeaderWriterFilter@39d666e0, org.springframework.security.web.authentication.logout.LogoutFilter@4295b0b8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4bbb00a4, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@66161fee, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@67127bb1, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@22ccd80f, org.springframework.security.web.access.ExceptionTranslationFilter@5f160f9c, org.springframework.security.web.access.intercept.AuthorizationFilter@6f3a8d5e] policy-api | [2024-01-23T12:00:00.690+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2024-01-23T12:00:00.747+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2024-01-23T12:00:00.770+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2024-01-23T12:00:00.791+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.209 seconds (process running for 11.824) policy-api | [2024-01-23T12:00:20.145+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2024-01-23T12:00:20.145+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2024-01-23T12:00:20.147+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms policy-api | [2024-01-23T12:00:20.412+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: policy-api | [] kafka | [2024-01-23 11:59:42,126] INFO Opening socket connection to server 
zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-23 11:59:42,126] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-23 11:59:42,134] INFO Socket connection established, initiating session, client: /172.17.0.9:56920, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-23 11:59:42,165] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000043fa10000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-23 11:59:42,286] INFO Session: 0x10000043fa10000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:42,287] INFO EventThread shut down for session: 0x10000043fa10000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... kafka | [2024-01-23 11:59:43,006] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-01-23 11:59:43,333] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-01-23 11:59:43,403] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-01-23 11:59:43,404] INFO starting (kafka.server.KafkaServer) kafka | [2024-01-23 11:59:43,405] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-01-23 11:59:43,419] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-01-23 11:59:43,424] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:43,424] INFO Client environment:host.name=451f6c43c6af (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:43,424] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:43,424] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:43,424] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:43,424] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-tran
sport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:43,424] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:43,424] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:43,424] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:43,424] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:43,424] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 
11:59:43,424] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:43,424] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:43,424] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:43,424] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-23 11:59:43,424] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 5e219e28-7118-417e-b91d-edf2321c7473 policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-01-23T11:59:39.408868738Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-01-23T11:59:39.410018506Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.148508ms grafana | logger=migrator t=2024-01-23T11:59:39.415901786Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-01-23T11:59:39.417063945Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.162129ms grafana | logger=migrator t=2024-01-23T11:59:39.423942975Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-01-23T11:59:39.42542083Z level=info msg="Migration successfully 
executed" id="drop index UQE_user_email - v1" duration=1.478025ms grafana | logger=migrator t=2024-01-23T11:59:39.431059607Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-01-23T11:59:39.434239589Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.179602ms grafana | logger=migrator t=2024-01-23T11:59:39.440754361Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-01-23T11:59:39.441620795Z level=info msg="Migration successfully executed" id="create user table v2" duration=866.414µs grafana | logger=migrator t=2024-01-23T11:59:39.44702511Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2024-01-23T11:59:39.448307995Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.282735ms grafana | logger=migrator t=2024-01-23T11:59:39.453527281Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2024-01-23T11:59:39.4548894Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.361909ms grafana | logger=migrator t=2024-01-23T11:59:39.460202891Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-01-23T11:59:39.460717197Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=514.386µs grafana | logger=migrator t=2024-01-23T11:59:39.465243327Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-01-23T11:59:39.466213217Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=969.4µs grafana | logger=migrator t=2024-01-23T11:59:39.471437973Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-01-23T11:59:39.473439225Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.994792ms grafana | logger=migrator t=2024-01-23T11:59:39.476853938Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2024-01-23T11:59:39.476989925Z level=info msg="Migration successfully executed" id="Update user table charset" duration=135.557µs grafana | logger=migrator t=2024-01-23T11:59:39.484607953Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-01-23T11:59:39.486541141Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.932908ms grafana | logger=migrator t=2024-01-23T11:59:39.492773759Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-01-23T11:59:39.493339657Z level=info msg="Migration successfully executed" id="Add missing user data" duration=565.749µs grafana | logger=migrator t=2024-01-23T11:59:39.500565675Z level=info msg="Executing migration" id="Add is_disabled column to user" policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 
policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-01-23T12:00:14.634+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-apex-pdp | [2024-01-23T12:00:14.634+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-apex-pdp | [2024-01-23T12:00:14.634+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011214634 policy-apex-pdp | [2024-01-23T12:00:14.635+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-01-23T12:00:14.637+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c16be33e-df41-44df-94a4-99528a749fa0, alive=false, publisher=null]]: starting policy-apex-pdp | [2024-01-23T12:00:14.664+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | buffer.memory = 33554432 policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.type = none policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | delivery.timeout.ms = 120000 policy-apex-pdp | enable.idempotence = true policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | 
max.in.flight.requests.per.connection = 5 policy-apex-pdp | max.request.size = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-apex-pdp | partitioner.class = null policy-apex-pdp | partitioner.ignore.keys = false policy-apex-pdp | receive.buffer.bytes = 32768 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retries = 2147483647 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | 
ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | transaction.timeout.ms = 60000 policy-apex-pdp | transactional.id = null policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | policy-apex-pdp | [2024-01-23T12:00:14.709+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-apex-pdp | [2024-01-23T12:00:14.744+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-apex-pdp | [2024-01-23T12:00:14.745+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-apex-pdp | [2024-01-23T12:00:14.745+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011214744 policy-apex-pdp | [2024-01-23T12:00:14.747+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c16be33e-df41-44df-94a4-99528a749fa0, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-apex-pdp | [2024-01-23T12:00:14.747+00:00|INFO|ServiceManager|main] service manager starting set alive policy-apex-pdp | [2024-01-23T12:00:14.747+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-apex-pdp | [2024-01-23T12:00:14.750+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-apex-pdp | [2024-01-23T12:00:14.750+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-apex-pdp | [2024-01-23T12:00:14.753+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-apex-pdp | [2024-01-23T12:00:14.753+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-apex-pdp | [2024-01-23T12:00:14.753+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-apex-pdp | [2024-01-23T12:00:14.754+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5e219e28-7118-417e-b91d-edf2321c7473, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4ee37ca3 policy-apex-pdp | [2024-01-23T12:00:14.754+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5e219e28-7118-417e-b91d-edf2321c7473, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-apex-pdp | [2024-01-23T12:00:14.754+00:00|INFO|ServiceManager|main] 
service manager starting Create REST server policy-apex-pdp | [2024-01-23T12:00:14.783+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: policy-apex-pdp | [] policy-apex-pdp | [2024-01-23T12:00:14.785+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c834911f-6dc0-4825-9b0e-296ed02f1e44","timestampMs":1706011214757,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-01-23T12:00:14.975+00:00|INFO|ServiceManager|main] service manager starting Rest Server policy-apex-pdp | [2024-01-23T12:00:14.976+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-01-23T12:00:14.976+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-apex-pdp | [2024-01-23T12:00:14.976+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-01-23T12:00:14.988+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-01-23T12:00:14.989+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-01-23T12:00:14.989+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
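The PDP_STATUS heartbeat above goes out through the producer whose ProducerConfig was dumped earlier. A minimal sketch of an equivalent publish, assuming kafka-clients 3.6.0: acks, idempotence and the String serializers mirror the dump, the payload is an abridged copy of the heartbeat JSON from the log, and the class name is illustrative.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class HeartbeatPublisherSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // as in the dump
            props.put(ProducerConfig.ACKS_CONFIG, "all");              // acks = -1
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    StringSerializer.class.getName());
            // Abridged from the PDP_STATUS message shown in the log above.
            String payload = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\","
                    + "\"healthy\":\"HEALTHY\",\"messageName\":\"PDP_STATUS\","
                    + "\"pdpGroup\":\"defaultGroup\"}";
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", payload));
                producer.flush();
            }
        }
    }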
policy-apex-pdp | [2024-01-23T12:00:14.989+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@4628b1d3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@77cf3f8b{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-apex-pdp | [2024-01-23T12:00:15.072+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: sXWmytVdQyKDGijCKdambA
policy-apex-pdp | [2024-01-23T12:00:15.074+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
policy-apex-pdp | [2024-01-23T12:00:15.078+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Cluster ID: sXWmytVdQyKDGijCKdambA
policy-apex-pdp | [2024-01-23T12:00:15.080+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-apex-pdp | [2024-01-23T12:00:15.089+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] (Re-)joining group
policy-apex-pdp | [2024-01-23T12:00:15.110+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Request joining group due to: need to re-join with the given member-id: consumer-5e219e28-7118-417e-b91d-edf2321c7473-2-d01c9040-7f78-415c-8a67-4b73bfd12a93
policy-apex-pdp | [2024-01-23T12:00:15.110+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-apex-pdp | [2024-01-23T12:00:15.110+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] (Re-)joining group
policy-apex-pdp | [2024-01-23T12:00:15.653+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
policy-apex-pdp | [2024-01-23T12:00:15.655+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
policy-apex-pdp | [2024-01-23T12:00:18.116+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Successfully joined group with generation Generation{generationId=1, memberId='consumer-5e219e28-7118-417e-b91d-edf2321c7473-2-d01c9040-7f78-415c-8a67-4b73bfd12a93', protocol='range'}
policy-apex-pdp | [2024-01-23T12:00:18.125+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Finished assignment for group at generation 1: {consumer-5e219e28-7118-417e-b91d-edf2321c7473-2-d01c9040-7f78-415c-8a67-4b73bfd12a93=Assignment(partitions=[policy-pdp-pap-0])}
policy-apex-pdp | [2024-01-23T12:00:18.137+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Successfully synced group in generation Generation{generationId=1, memberId='consumer-5e219e28-7118-417e-b91d-edf2321c7473-2-d01c9040-7f78-415c-8a67-4b73bfd12a93', protocol='range'}
policy-apex-pdp | [2024-01-23T12:00:18.137+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-apex-pdp | [2024-01-23T12:00:18.139+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Adding newly assigned partitions: policy-pdp-pap-0
kafka | [2024-01-23 11:59:43,424] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-23 11:59:43,424] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-23 11:59:43,426] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@68be8808 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-23 11:59:43,430] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2024-01-23 11:59:43,436] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-23 11:59:43,437] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-01-23 11:59:43,440] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-23 11:59:43,450] INFO Socket connection established, initiating session, client: /172.17.0.9:56922, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-23 11:59:43,460] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000043fa10001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-23 11:59:43,466] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-01-23 11:59:43,771] INFO Cluster ID = sXWmytVdQyKDGijCKdambA (kafka.server.KafkaServer)
kafka | [2024-01-23 11:59:43,773] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka | [2024-01-23 11:59:43,819] INFO KafkaConfig values:
kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | alter.config.policy.class.name = null
kafka | alter.log.dirs.replication.quota.window.num = 11
kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | authorizer.class.name =
kafka | auto.create.topics.enable = true
kafka | auto.include.jmx.reporter = true
kafka | auto.leader.rebalance.enable = true
kafka | background.threads = 10
kafka | broker.heartbeat.interval.ms = 2000
kafka | broker.id = 1
kafka | broker.id.generation.enable = true
kafka | broker.rack = null
kafka | broker.session.timeout.ms = 9000
kafka | client.quota.callback.class = null
kafka | compression.type = producer
kafka | connection.failed.authentication.delay.ms = 100
kafka | connections.max.idle.ms = 600000
kafka | connections.max.reauth.ms = 0
kafka | control.plane.listener.name = null
kafka | controlled.shutdown.enable = true
kafka | controlled.shutdown.max.retries = 3
kafka | controlled.shutdown.retry.backoff.ms = 5000
kafka | controller.listener.names = null
policy-apex-pdp | [2024-01-23T12:00:18.148+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Found no committed offset for partition policy-pdp-pap-0
zookeeper_1 | [2024-01-23 11:59:40,912] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
grafana | logger=migrator t=2024-01-23T11:59:39.502545656Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.979271ms
kafka | controller.quorum.append.linger.ms = 25
kafka | controller.quorum.election.backoff.max.ms = 1000
policy-apex-pdp | [2024-01-23T12:00:18.161+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5e219e28-7118-417e-b91d-edf2321c7473-2, groupId=5e219e28-7118-417e-b91d-edf2321c7473] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-pap | Waiting for mariadb port 3306...
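The MemberIdRequiredException above is the normal first-contact handshake, not a failure: the broker rejects the consumer's initial JoinGroup request, hands it a member id, and the client immediately rejoins, which is why the very next entry is another "(Re-)joining group" followed by a successful join and the assignment of policy-pdp-pap-0. To watch the same topic by hand, a sketch using the console consumer shipped in the broker image (the container name kafka and the bootstrap address are taken from the log; running it from inside the container is an assumption about how the ports are exposed):

# tail the PDP<->PAP topic; joining creates a new consumer group,
# so this does not steal messages from the apex-pdp consumer above
docker exec -it kafka kafka-console-consumer \
  --bootstrap-server kafka:9092 \
  --topic policy-pdp-pap \
  --from-beginning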
prometheus | ts=2024-01-23T11:59:35.248Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d
zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
grafana | logger=migrator t=2024-01-23T11:59:39.508858747Z level=info msg="Executing migration" id="Add index user.login/user.email"
policy-db-migrator | Waiting for mariadb port 3306...
kafka | controller.quorum.election.timeout.ms = 1000
policy-apex-pdp | [2024-01-23T12:00:34.753+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-pap | mariadb (172.17.0.3:3306) open
prometheus | ts=2024-01-23T11:59:35.248Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)"
zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
grafana | logger=migrator t=2024-01-23T11:59:39.509844788Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=986.56µs
simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
kafka | controller.quorum.fetch.timeout.ms = 2000
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"467b6bf8-582b-4dbd-92b4-9e245489db39","timestampMs":1706011234753,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"}
policy-pap | Waiting for kafka port 9092...
prometheus | ts=2024-01-23T11:59:35.248Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)"
zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
grafana | logger=migrator t=2024-01-23T11:59:39.515147397Z level=info msg="Executing migration" id="Add is_service_account column to user"
simulator | overriding logback.xml
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
kafka | controller.quorum.request.timeout.ms = 2000
policy-apex-pdp | [2024-01-23T12:00:34.781+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | kafka (172.17.0.9:9092) open
prometheus | ts=2024-01-23T11:59:35.248Z caller=main.go:594 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
grafana | logger=migrator t=2024-01-23T11:59:39.516396131Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.249664ms
simulator | 2024-01-23 11:59:36,739 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
kafka | controller.quorum.retry.backoff.ms = 20
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"467b6bf8-582b-4dbd-92b4-9e245489db39","timestampMs":1706011234753,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"}
policy-pap | Waiting for api port 6969...
prometheus | ts=2024-01-23T11:59:35.248Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)"
zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
grafana | logger=migrator t=2024-01-23T11:59:39.522538674Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
simulator | 2024-01-23 11:59:36,817 INFO org.onap.policy.models.simulators starting
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
kafka | controller.quorum.voters = []
policy-apex-pdp | [2024-01-23T12:00:34.785+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-pap | api (172.17.0.7:6969) open
prometheus | ts=2024-01-23T11:59:35.248Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)"
zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
simulator | 2024-01-23 11:59:36,817 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
kafka | controller.quota.window.num = 11
policy-apex-pdp | [2024-01-23T12:00:34.932+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
prometheus | ts=2024-01-23T11:59:35.250Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
simulator | 2024-01-23 11:59:37,036 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
kafka | controller.quota.window.size.seconds = 1
policy-apex-pdp | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
prometheus | ts=2024-01-23T11:59:35.251Z caller=main.go:1039 level=info msg="Starting TSDB ..."
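The repeated "nc: connect to mariadb ... Connection refused" lines, and policy-pap's "Waiting for ... port ..." / "... open" pairs, are startup gates: each dependent container polls its dependency's TCP port before proceeding. A minimal sketch of such a gate (host and port are the ones in the log; the loop shape and retry interval are assumptions, since the actual wait script is not shown here):

# block until mariadb accepts TCP connections; each failed probe
# prints the "Connection refused" line seen interleaved above
until nc -z mariadb 3306; do
  sleep 2
done
echo "mariadb port 3306 open"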
zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer)
simulator | 2024-01-23 11:59:37,037 INFO org.onap.policy.models.simulators starting A&AI simulator
kafka | controller.socket.timeout.ms = 30000
policy-apex-pdp | [2024-01-23T12:00:34.944+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
policy-pap |
prometheus | ts=2024-01-23T11:59:35.257Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090
zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-23 11:59:40,913] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
simulator | 2024-01-23 11:59:37,164 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded!
kafka | create.topic.policy.class.name = null
policy-apex-pdp | [2024-01-23T12:00:34.944+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
policy-pap | . ____ _ __ _ _
prometheus | ts=2024-01-23T11:59:35.257Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
zookeeper_1 | [2024-01-23 11:59:40,913] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-23 11:59:40,913] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
simulator | 2024-01-23 11:59:37,176 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d2388848-0012-45a5-abaf-541938745a99","timestampMs":1706011234943,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"}
policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
prometheus | ts=2024-01-23T11:59:35.259Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
zookeeper_1 | [2024-01-23 11:59:40,913] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-23 11:59:40,913] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
simulator | 2024-01-23 11:59:37,182 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
kafka | default.replication.factor = 1
policy-db-migrator | Preparing upgrade release version: 0900
policy-apex-pdp | [2024-01-23T12:00:34.945+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
prometheus | ts=2024-01-23T11:59:35.259Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.27µs
zookeeper_1 | [2024-01-23 11:59:40,913] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
grafana | logger=migrator t=2024-01-23T11:59:39.532387745Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.848831ms
simulator | 2024-01-23 11:59:37,188 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
kafka | delegation.token.expiry.check.interval.ms = 3600000
policy-db-migrator | Preparing upgrade release version: 1000
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f2f3d3ad-4c80-4136-9424-630cff59eb41","timestampMs":1706011234944,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
prometheus | ts=2024-01-23T11:59:35.259Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while"
zookeeper_1 | [2024-01-23 11:59:40,913] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-23 11:59:40,913] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
simulator | 2024-01-23 11:59:37,256 INFO Session workerName=node0
kafka | delegation.token.expiry.time.ms = 86400000
policy-db-migrator | Preparing upgrade release version: 1100
policy-apex-pdp | [2024-01-23T12:00:34.959+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
prometheus | ts=2024-01-23T11:59:35.260Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
zookeeper_1 | [2024-01-23 11:59:40,914] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
zookeeper_1 | [2024-01-23 11:59:40,915] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer)
simulator | 2024-01-23 11:59:37,741 INFO Using GSON for REST calls
kafka | delegation.token.master.key = null
policy-db-migrator | Preparing upgrade release version: 1200
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d2388848-0012-45a5-abaf-541938745a99","timestampMs":1706011234943,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"}
policy-pap | =========|_|==============|___/=/_/_/_/
prometheus | ts=2024-01-23T11:59:35.260Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=28.441µs wal_replay_duration=375.77µs wbl_replay_duration=170ns total_replay_duration=437.353µs
zookeeper_1 | [2024-01-23 11:59:40,915] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-23 11:59:40,916] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
simulator | 2024-01-23 11:59:37,830 INFO Started o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}
kafka | delegation.token.max.lifetime.ms = 604800000
policy-db-migrator | Preparing upgrade release version: 1300
policy-apex-pdp | [2024-01-23T12:00:34.960+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-pap | :: Spring Boot :: (v3.1.7)
prometheus | ts=2024-01-23T11:59:35.262Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC
zookeeper_1 | [2024-01-23 11:59:40,916] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
grafana | logger=migrator t=2024-01-23T11:59:39.538937878Z level=info msg="Executing migration" id="create temp user table v1-7"
simulator | 2024-01-23 11:59:37,838 INFO Started A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
kafka | delegation.token.secret.key = null
policy-db-migrator | Done
policy-apex-pdp | [2024-01-23T12:00:34.960+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap |
prometheus | ts=2024-01-23T11:59:35.262Z caller=main.go:1063 level=info msg="TSDB started"
zookeeper_1 | [2024-01-23 11:59:40,918] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-01-23 11:59:40,918] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
simulator | 2024-01-23 11:59:37,845 INFO Started Server@16746061{STARTING}[11.0.18,sto=0] @1577ms
kafka | delete.records.purgatory.purge.interval.requests = 1
policy-db-migrator | name version
policy-pap | [2024-01-23T12:00:03.464+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.9 with PID 34 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f2f3d3ad-4c80-4136-9424-630cff59eb41","timestampMs":1706011234944,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
prometheus | ts=2024-01-23T11:59:35.262Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
zookeeper_1 | [2024-01-23 11:59:40,919] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-01-23 11:59:40,919] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
simulator | 2024-01-23 11:59:37,845 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4337 ms.
kafka | delete.topic.enable = true
policy-db-migrator | policyadmin 0
policy-pap | [2024-01-23T12:00:03.465+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-apex-pdp | [2024-01-23T12:00:34.961+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
prometheus | ts=2024-01-23T11:59:35.265Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=3.012099ms db_storage=1.62µs remote_storage=1.91µs web_handler=810ns query_engine=991ns scrape=237.172µs scrape_sd=137.567µs notify=33.812µs notify_sd=13.841µs rules=1.8µs tracing=5.36µs
zookeeper_1 | [2024-01-23 11:59:40,919] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-01-23 11:59:40,919] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
simulator | 2024-01-23 11:59:37,850 INFO org.onap.policy.models.simulators starting SDNC simulator
kafka | early.start.listeners = null
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-pap | [2024-01-23T12:00:05.335+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-apex-pdp | [2024-01-23T12:00:35.005+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
prometheus | ts=2024-01-23T11:59:35.265Z caller=main.go:1024 level=info msg="Server is ready to receive web requests."
zookeeper_1 | [2024-01-23 11:59:40,923] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
grafana | logger=migrator t=2024-01-23T11:59:39.540395943Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.458254ms
simulator | 2024-01-23 11:59:37,853 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
kafka | fetch.max.bytes = 57671680
policy-db-migrator | upgrade: 0 -> 1300
policy-pap | [2024-01-23T12:00:05.454+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 108 ms. Found 7 JPA repository interfaces.
policy-apex-pdp | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
prometheus | ts=2024-01-23T11:59:35.266Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..."
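Once prometheus logs "Server is ready to receive web requests.", the instance can be probed over HTTP: Prometheus 2.x exposes /-/healthy and /-/ready endpoints for liveness and readiness checks. A sketch (port 9090 comes from the "Start listening for connections" entry above; running this on the CI host assumes the port is published there):

# each endpoint answers 200 with a short status line once the server is up
curl -s http://localhost:9090/-/healthy
curl -s http://localhost:9090/-/ready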
grafana | logger=migrator t=2024-01-23T11:59:39.546984248Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2024-01-23T11:59:39.547822081Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=837.752µs
simulator | 2024-01-23 11:59:37,854 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
kafka | fetch.purgatory.purge.interval.requests = 1000
policy-db-migrator |
policy-pap | [2024-01-23T12:00:05.852+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-apex-pdp | [2024-01-23T12:00:35.008+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-01-23T11:59:39.555458429Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2024-01-23T11:59:39.556763696Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.298696ms
simulator | 2024-01-23 11:59:37,855 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
kafka | group.consumer.assignors = []
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-pap | [2024-01-23T12:00:05.853+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"9ca47fc4-4a5a-4269-ac4b-8ea5170943ca","timestampMs":1706011235008,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-23T11:59:39.565505621Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
zookeeper_1 | [2024-01-23 11:59:40,923] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
simulator | 2024-01-23 11:59:37,856 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
kafka | group.consumer.heartbeat.interval.ms = 5000
policy-db-migrator | --------------
policy-pap | [2024-01-23T12:00:06.508+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-apex-pdp | [2024-01-23T12:00:35.016+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-01-23T11:59:39.566792036Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.284745ms
simulator | 2024-01-23 11:59:37,871 INFO Session workerName=node0
kafka | group.consumer.max.heartbeat.interval.ms = 15000
zookeeper_1 | [2024-01-23 11:59:40,924] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-pap | [2024-01-23T12:00:06.518+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"9ca47fc4-4a5a-4269-ac4b-8ea5170943ca","timestampMs":1706011235008,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-23T11:59:39.572614633Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
simulator | 2024-01-23 11:59:37,941 INFO Using GSON for REST calls
kafka | group.consumer.max.session.timeout.ms = 60000
zookeeper_1 | [2024-01-23 11:59:40,924] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
policy-db-migrator | --------------
policy-pap | [2024-01-23T12:00:06.520+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-apex-pdp | [2024-01-23T12:00:35.016+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
grafana | logger=migrator t=2024-01-23T11:59:39.573490447Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=875.345µs
simulator | 2024-01-23 11:59:37,951 INFO Started o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}
kafka | group.consumer.max.size = 2147483647
zookeeper_1 | [2024-01-23 11:59:40,925] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator |
policy-pap | [2024-01-23T12:00:06.520+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
policy-apex-pdp | [2024-01-23T12:00:35.052+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-01-23T11:59:39.577905762Z level=info msg="Executing migration" id="Update temp_user table charset"
simulator | 2024-01-23 11:59:37,953 INFO Started SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
kafka | group.consumer.min.heartbeat.interval.ms = 5000
zookeeper_1 | [2024-01-23 11:59:40,954] INFO Logging initialized @573ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
policy-db-migrator |
policy-pap | [2024-01-23T12:00:06.607+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-apex-pdp | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63a76968-fae6-4e69-9528-57bfc1bb20a8","timestampMs":1706011235028,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-23T11:59:39.578003037Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=98.525µs
simulator | 2024-01-23 11:59:37,953 INFO Started Server@75459c75{STARTING}[11.0.18,sto=0] @1685ms
kafka | group.consumer.min.session.timeout.ms = 45000
zookeeper_1 | [2024-01-23 11:59:41,054] WARN o.e.j.s.ServletContextHandler@45385f75{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-pap | [2024-01-23T12:00:06.607+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3063 ms
policy-apex-pdp | [2024-01-23T12:00:35.054+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-01-23T11:59:39.584609663Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
simulator | 2024-01-23 11:59:37,953 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4902 ms.
zookeeper_1 | [2024-01-23 11:59:41,054] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
policy-db-migrator | --------------
policy-pap | [2024-01-23T12:00:07.036+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63a76968-fae6-4e69-9528-57bfc1bb20a8","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d7b3019b-93eb-43bb-bab7-dfe71e0e46ae","timestampMs":1706011235053,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-23T11:59:39.586003104Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.38467ms
simulator | 2024-01-23 11:59:37,955 INFO org.onap.policy.models.simulators starting SO simulator
zookeeper_1 | [2024-01-23 11:59:41,071] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
policy-pap | [2024-01-23T12:00:07.122+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
policy-apex-pdp | [2024-01-23T12:00:35.062+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-01-23T11:59:39.592004899Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
simulator | 2024-01-23 11:59:37,964 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
zookeeper_1 | [2024-01-23 11:59:41,106] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
policy-db-migrator | --------------
policy-pap | [2024-01-23T12:00:07.125+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63a76968-fae6-4e69-9528-57bfc1bb20a8","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d7b3019b-93eb-43bb-bab7-dfe71e0e46ae","timestampMs":1706011235053,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-23T11:59:39.59279354Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=788.951µs
simulator | 2024-01-23 11:59:37,965 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
zookeeper_1 | [2024-01-23 11:59:41,107] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
policy-db-migrator |
policy-pap | [2024-01-23T12:00:07.173+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-apex-pdp | [2024-01-23T12:00:35.062+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
grafana | logger=migrator t=2024-01-23T11:59:39.599464859Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
kafka | group.consumer.session.timeout.ms = 45000
simulator | 2024-01-23 11:59:37,966 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
zookeeper_1 | [2024-01-23 11:59:41,108] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
policy-db-migrator |
policy-pap | [2024-01-23T12:00:07.532+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-apex-pdp | [2024-01-23T12:00:56.156+00:00|INFO|RequestLog|qtp830863979-33] 172.17.0.5 - policyadmin [23/Jan/2024:12:00:56 +0000] "GET /metrics HTTP/1.1" 200 10647 "-" "Prometheus/2.49.1"
grafana | logger=migrator t=2024-01-23T11:59:39.600625058Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.160339ms
kafka | group.coordinator.new.enable = false
simulator | 2024-01-23 11:59:37,967 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
zookeeper_1 | [2024-01-23 11:59:41,111] WARN ServletContext@o.e.j.s.ServletContextHandler@45385f75{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
policy-pap | [2024-01-23T12:00:07.552+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-apex-pdp | [2024-01-23T12:01:56.079+00:00|INFO|RequestLog|qtp830863979-28] 172.17.0.5 - policyadmin [23/Jan/2024:12:01:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.49.1"
grafana | logger=migrator t=2024-01-23T11:59:39.605581621Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
kafka | group.coordinator.threads = 1
simulator | 2024-01-23 11:59:37,971 INFO Session workerName=node0
zookeeper_1 | [2024-01-23 11:59:41,118] INFO Started o.e.j.s.ServletContextHandler@45385f75{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
policy-db-migrator | --------------
policy-pap | [2024-01-23T12:00:07.663+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4068102e
grafana | logger=migrator t=2024-01-23T11:59:39.606771061Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.187191ms
kafka | group.initial.rebalance.delay.ms = 3000
simulator | 2024-01-23 11:59:38,027 INFO Using GSON for REST calls
zookeeper_1 | [2024-01-23 11:59:41,131] INFO Started ServerConnector@304bb45b{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-pap | [2024-01-23T12:00:07.665+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
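The RequestLog entries above show Prometheus scraping apex-pdp's /metrics endpoint roughly once a minute with HTTP basic auth, getting 200s with a ~10 kB payload. The same scrape can be reproduced by hand; the credentials and port below are the ones printed in the JettyServletServer entry at the top of this log, and reaching them via localhost assumes port 6969 is published to the CI host (otherwise run the command from a container on the same compose network):

# fetch the Prometheus text exposition the scraper sees
curl -s -u 'policyadmin:zb!XztG34' http://localhost:6969/metrics | head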
grafana | logger=migrator t=2024-01-23T11:59:39.611627958Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
kafka | group.max.session.timeout.ms = 1800000
simulator | 2024-01-23 11:59:38,046 INFO Started o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}
zookeeper_1 | [2024-01-23 11:59:41,131] INFO Started @750ms (org.eclipse.jetty.server.Server)
policy-db-migrator | --------------
policy-pap | [2024-01-23T12:00:07.695+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default)
grafana | logger=migrator t=2024-01-23T11:59:39.616434423Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.807435ms
kafka | group.max.size = 2147483647
simulator | 2024-01-23 11:59:38,047 INFO Started SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
zookeeper_1 | [2024-01-23 11:59:41,131] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:39.620949363Z level=info msg="Executing migration" id="create temp_user v2"
zookeeper_1 | [2024-01-23 11:59:41,136] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
policy-db-migrator |
policy-pap | [2024-01-23T12:00:07.696+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead
simulator | 2024-01-23 11:59:38,047 INFO Started Server@30bcf3c1{STARTING}[11.0.18,sto=0] @1780ms
kafka | group.min.session.timeout.ms = 6000
grafana | logger=migrator t=2024-01-23T11:59:39.621829628Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=880.124µs
zookeeper_1 | [2024-01-23 11:59:41,137] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
policy-pap | [2024-01-23T12:00:09.591+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
simulator | 2024-01-23 11:59:38,047 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4919 ms.
kafka | initial.broker.registration.timeout.ms = 60000
grafana | logger=migrator t=2024-01-23T11:59:39.625872573Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
zookeeper_1 | [2024-01-23 11:59:41,139] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
policy-db-migrator | --------------
policy-pap | [2024-01-23T12:00:09.594+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
simulator | 2024-01-23 11:59:38,049 INFO org.onap.policy.models.simulators starting VFC simulator
kafka | inter.broker.listener.name = PLAINTEXT
grafana | logger=migrator t=2024-01-23T11:59:39.626706606Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=833.603µs
zookeeper_1 | [2024-01-23 11:59:41,140] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-pap | [2024-01-23T12:00:10.153+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
simulator | 2024-01-23 11:59:38,056 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
grafana | logger=migrator t=2024-01-23T11:59:39.630819415Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2024-01-23T11:59:39.631800745Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=981.39µs
zookeeper_1 | [2024-01-23 11:59:41,152] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
policy-db-migrator | --------------
policy-pap | [2024-01-23T12:00:10.703+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
simulator | 2024-01-23 11:59:38,057 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-01-23T11:59:39.635577237Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2024-01-23T11:59:39.636430621Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=853.194µs
zookeeper_1 | [2024-01-23 11:59:41,152] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
policy-pap | [2024-01-23T12:00:10.812+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
simulator | 2024-01-23 11:59:38,058 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-01-23T11:59:39.643551203Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2024-01-23T11:59:39.645098832Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.540578ms
policy-db-migrator |
zookeeper_1 | [2024-01-23 11:59:41,153] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
policy-pap | [2024-01-23T12:00:11.073+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
simulator | 2024-01-23 11:59:38,059 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
grafana | logger=migrator t=2024-01-23T11:59:39.650440984Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2024-01-23T11:59:39.650899277Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=458.083µs
policy-db-migrator |
policy-pap | allow.auto.create.topics = true
simulator | 2024-01-23 11:59:38,076 INFO Session workerName=node0
kafka | inter.broker.protocol.version = 3.5-IV2
grafana | logger=migrator t=2024-01-23T11:59:39.653882459Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
zookeeper_1 | [2024-01-23 11:59:41,153] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-pap | auto.commit.interval.ms = 5000
simulator | 2024-01-23 11:59:38,145 INFO Using GSON for REST calls
kafka | kafka.metrics.polling.interval.secs = 10
grafana | logger=migrator t=2024-01-23T11:59:39.654704961Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=821.892µs
zookeeper_1 | [2024-01-23 11:59:41,157] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
policy-db-migrator | --------------
policy-pap | auto.include.jmx.reporter = true
simulator | 2024-01-23 11:59:38,153 INFO Started o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}
kafka | kafka.metrics.reporters = []
grafana | logger=migrator t=2024-01-23T11:59:39.662173901Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
zookeeper_1 | [2024-01-23 11:59:41,157] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-pap | auto.offset.reset = latest
simulator | 2024-01-23 11:59:38,154 INFO Started VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
kafka | leader.imbalance.check.interval.seconds = 300
grafana | logger=migrator t=2024-01-23T11:59:39.662516349Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=342.757µs
zookeeper_1 | [2024-01-23 11:59:41,160] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
policy-db-migrator | --------------
policy-pap | bootstrap.servers = [kafka:9092]
simulator | 2024-01-23 11:59:38,154 INFO Started Server@a776e{STARTING}[11.0.18,sto=0] @1887ms
kafka | leader.imbalance.per.broker.percentage = 10
grafana | logger=migrator t=2024-01-23T11:59:39.675269598Z level=info msg="Executing migration" id="create star table"
zookeeper_1 | [2024-01-23 11:59:41,161] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
policy-db-migrator |
policy-pap | check.crcs = true
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
simulator | 2024-01-23 11:59:38,154 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4903 ms.
grafana | logger=migrator t=2024-01-23T11:59:39.676009095Z level=info msg="Migration successfully executed" id="create star table" duration=773.729µs
zookeeper_1 | [2024-01-23 11:59:41,161] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator |
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
simulator | 2024-01-23 11:59:38,156 INFO org.onap.policy.models.simulators started
grafana | logger=migrator t=2024-01-23T11:59:39.69162302Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
zookeeper_1 | [2024-01-23 11:59:41,169] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-pap | client.id = consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-1
kafka | log.cleaner.backoff.ms = 15000
grafana | logger=migrator t=2024-01-23T11:59:39.692470773Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=849.393µs
zookeeper_1 | [2024-01-23 11:59:41,169] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
policy-db-migrator | --------------
policy-pap | client.rack =
kafka | log.cleaner.dedupe.buffer.size = 134217728
grafana | logger=migrator t=2024-01-23T11:59:39.700310792Z level=info msg="Executing migration" id="create org table v1"
zookeeper_1 | [2024-01-23 11:59:41,193] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
policy-pap | connections.max.idle.ms = 540000
kafka | log.cleaner.delete.retention.ms = 86400000
grafana | logger=migrator t=2024-01-23T11:59:39.701020508Z level=info msg="Migration successfully executed" id="create org table v1" duration=709.986µs
zookeeper_1 | [2024-01-23 11:59:41,194] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
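The simulator lines above walk the JettyJerseyServer lifecycle from WAITED-START through STARTING to "org.onap.policy.models.simulators started": an embedded Jetty 11 Server with a ServletContextHandler at /, a Jersey ServletContainer mapped to /*, and a connector on 0.0.0.0:6670. A minimal sketch of that wiring follows; it assumes Jetty 11 and Jersey 3 on the classpath, the resource package name is hypothetical, and this is not the ONAP simulator source itself.

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;
    import org.glassfish.jersey.servlet.ServletContainer;

    public class VfcSimulatorSketch {
        public static void main(String[] args) throws Exception {
            Server server = new Server(6670);                      // port from the connector line above
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");                           // contextPath=/ as logged
            ServletHolder jersey = new ServletHolder(ServletContainer.class);
            // hypothetical package name; the real simulator registers its own REST resources
            jersey.setInitParameter("jersey.config.server.provider.packages", "org.example.vfcsim");
            context.addServlet(jersey, "/*");                      // servlet mapping /* as logged
            server.setHandler(context);
            server.start();  // emits "Started o.e.j.s.ServletContextHandler..." lines like those above
            server.join();
        }
    }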
policy-db-migrator | --------------
policy-pap | default.api.timeout.ms = 60000
grafana | logger=migrator t=2024-01-23T11:59:39.707219524Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
policy-db-migrator |
policy-pap | enable.auto.commit = true
kafka | log.cleaner.enable = true
zookeeper_1 | [2024-01-23 11:59:42,149] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
policy-db-migrator |
policy-pap | exclude.internal.topics = true
kafka | log.cleaner.io.buffer.load.factor = 0.9
grafana | logger=migrator t=2024-01-23T11:59:39.707907759Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=688.035µs
policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
policy-pap | fetch.max.bytes = 52428800
kafka | log.cleaner.io.buffer.size = 524288
grafana | logger=migrator t=2024-01-23T11:59:39.713266702Z level=info msg="Executing migration" id="create org_user table v1"
policy-db-migrator | --------------
policy-pap | fetch.max.wait.ms = 500
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | fetch.min.bytes = 1
kafka | log.cleaner.min.cleanable.ratio = 0.5
kafka | log.cleaner.min.compaction.lag.ms = 0
policy-db-migrator | --------------
policy-pap | group.id = 7faaa365-1216-4c85-9c2d-e9bca189fc3d
grafana | logger=migrator t=2024-01-23T11:59:39.714238481Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=998.071µs
kafka | log.cleaner.threads = 1
policy-db-migrator |
policy-pap | group.instance.id = null
grafana | logger=migrator t=2024-01-23T11:59:39.724546316Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
kafka | log.cleanup.policy = [delete]
policy-db-migrator |
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-01-23T11:59:39.726436022Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.888006ms
kafka | log.dir = /tmp/kafka-logs
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-01-23T11:59:39.735292273Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
kafka | log.dirs = /var/lib/kafka/data
policy-db-migrator | --------------
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-01-23T11:59:39.736146216Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=854.253µs
kafka | log.flush.interval.messages = 9223372036854775807
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-01-23T11:59:39.749888826Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
kafka | log.flush.interval.ms = null
policy-db-migrator | --------------
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-01-23T11:59:39.751454145Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.565209ms
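By this point zookeeper_1 has bound its client port (2181), loaded an empty snapshot, started the request pipeline, and created its first transaction log file, so brokers and test clients can connect. Purely as a connectivity check, here is a minimal sketch using the standard org.apache.zookeeper client; the connect string and session timeout are illustrative values, not taken from the CSIT setup.

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ZkConnectSketch {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            // 2181 is the client port the zookeeper_1 container binds above
            ZooKeeper zk = new ZooKeeper("localhost:2181", 10_000, event -> {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();   // session established
                }
            });
            connected.await();
            System.out.println("connected, session 0x" + Long.toHexString(zk.getSessionId()));
            zk.close();
        }
    }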
kafka | log.flush.offset.checkpoint.interval.ms = 60000
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-01-23T11:59:39.756468401Z level=info msg="Executing migration" id="Update org table charset"
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
policy-db-migrator |
policy-pap | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-01-23T11:59:39.756509713Z level=info msg="Migration successfully executed" id="Update org table charset" duration=42.982µs
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
policy-db-migrator |
policy-pap | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-01-23T11:59:39.765519471Z level=info msg="Executing migration" id="Update org_user table charset"
kafka | log.index.interval.bytes = 4096
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-pap | max.poll.records = 500
grafana | logger=migrator t=2024-01-23T11:59:39.765570814Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=55.103µs
kafka | log.index.size.max.bytes = 10485760
policy-db-migrator | --------------
policy-pap | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-01-23T11:59:39.77590519Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
kafka | log.message.downconversion.enable = true
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-01-23T11:59:39.776275389Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=382.96µs
kafka | log.message.format.version = 3.0-IV1
policy-db-migrator | --------------
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-01-23T11:59:39.781737157Z level=info msg="Executing migration" id="create dashboard table"
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
policy-pap | metrics.recording.level = INFO
kafka | log.message.timestamp.type = CreateTime
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:39.782954049Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.214541ms
policy-pap | metrics.sample.window.ms = 30000
kafka | log.preallocate = false
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:39.825516735Z level=info msg="Executing migration" id="add index dashboard.account_id"
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
kafka | log.retention.bytes = -1
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
grafana | logger=migrator t=2024-01-23T11:59:39.827357009Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.846424ms
kafka | log.retention.check.interval.ms = 300000
grafana | logger=migrator t=2024-01-23T11:59:39.833574485Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
policy-pap | receive.buffer.bytes = 65536
policy-db-migrator | --------------
kafka | log.retention.hours = 168
grafana | logger=migrator t=2024-01-23T11:59:39.835184747Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.609482ms
policy-pap | reconnect.backoff.max.ms = 1000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-01-23T11:59:39.839015242Z level=info msg="Executing migration" id="create dashboard_tag table"
policy-db-migrator | --------------
kafka | log.retention.minutes = null
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-01-23T11:59:39.839486116Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=470.914µs
policy-db-migrator |
kafka | log.retention.ms = null
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-01-23T11:59:39.845091282Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
policy-db-migrator |
kafka | log.roll.hours = 168
policy-pap | retry.backoff.ms = 100
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-01-23T11:59:39.84682983Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.737739ms
kafka | log.roll.jitter.hours = 0
policy-db-migrator | --------------
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-01-23T11:59:39.851298737Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
kafka | log.roll.jitter.ms = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-01-23T11:59:39.852158921Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=860.084µs
kafka | log.roll.ms = null
policy-db-migrator | --------------
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-01-23T11:59:39.857621149Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
kafka | log.segment.bytes = 1073741824
policy-db-migrator |
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-01-23T11:59:39.866150173Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=8.516814ms
kafka | log.segment.delete.delay.ms = 60000
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:39.879139715Z level=info msg="Executing migration" id="create dashboard v2"
kafka | max.connection.creation.rate = 2147483647
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-01-23T11:59:39.880192798Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.044714ms
kafka | max.connections = 2147483647
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | max.connections.per.ip = 2147483647
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
grafana | logger=migrator t=2024-01-23T11:59:39.888401696Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
policy-pap | sasl.login.callback.handler.class = null
kafka | max.connections.per.ip.overrides =
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:39.890535845Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=2.108947ms
policy-pap | sasl.login.class = null
kafka | max.incremental.fetch.session.cache.slots = 1000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
grafana | logger=migrator t=2024-01-23T11:59:39.897561182Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
policy-pap | sasl.login.connect.timeout.ms = null
kafka | message.max.bytes = 1048588
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:39.898470019Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=912.086µs
kafka | metadata.log.dir = null
policy-db-migrator |
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-01-23T11:59:39.900976136Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
policy-db-migrator |
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-01-23T11:59:39.901352305Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=376.079µs
kafka | metadata.log.max.snapshot.interval.ms = 3600000
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-01-23T11:59:39.907229324Z level=info msg="Executing migration" id="drop table dashboard_v1"
kafka | metadata.log.segment.bytes = 1073741824
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-01-23T11:59:39.908447266Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.217722ms
kafka | metadata.log.segment.min.bytes = 8388608
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-01-23T11:59:39.913506404Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
kafka | metadata.log.segment.ms = 604800000
policy-db-migrator | --------------
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-01-23T11:59:39.913689733Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=183.539µs
kafka | metadata.max.idle.interval.ms = 500
policy-db-migrator |
policy-pap | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-23T11:59:39.919757972Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
policy-db-migrator |
policy-pap | sasl.mechanism = GSSAPI
kafka | metadata.max.retention.bytes = 104857600
grafana | logger=migrator t=2024-01-23T11:59:39.922814808Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.067167ms
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | metadata.max.retention.ms = 604800000
grafana | logger=migrator t=2024-01-23T11:59:39.928127598Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | metric.reporters = []
grafana | logger=migrator t=2024-01-23T11:59:39.930167192Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.038774ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | metrics.num.samples = 2
grafana | logger=migrator t=2024-01-23T11:59:39.93425758Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | metrics.recording.level = INFO
grafana | logger=migrator t=2024-01-23T11:59:39.936106564Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.849164ms
policy-db-migrator |
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-01-23T11:59:39.946296483Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
policy-db-migrator |
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | min.insync.replicas = 1
grafana | logger=migrator t=2024-01-23T11:59:39.94801705Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.720147ms
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
kafka | node.id = 1
grafana | logger=migrator t=2024-01-23T11:59:39.953400134Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.scope.claim.name = scope
kafka | num.io.threads = 8
grafana | logger=migrator t=2024-01-23T11:59:39.956289141Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.889387ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | num.network.threads = 3
grafana | logger=migrator t=2024-01-23T11:59:39.962116068Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | num.partitions = 1
grafana | logger=migrator t=2024-01-23T11:59:39.963039165Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=921.167µs
policy-db-migrator |
policy-pap | security.protocol = PLAINTEXT
kafka | num.recovery.threads.per.data.dir = 1
grafana | logger=migrator t=2024-01-23T11:59:39.970371768Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
policy-db-migrator |
policy-pap | security.providers = null
kafka | num.replica.alter.log.dirs.threads = null
grafana | logger=migrator t=2024-01-23T11:59:39.971825732Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.454994ms
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-pap | send.buffer.bytes = 131072
kafka | num.replica.fetchers = 1
grafana | logger=migrator t=2024-01-23T11:59:39.97767401Z level=info msg="Executing migration" id="Update dashboard table charset"
policy-db-migrator | --------------
policy-pap | session.timeout.ms = 45000
kafka | offset.metadata.max.bytes = 4096
grafana | logger=migrator t=2024-01-23T11:59:39.977760824Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=87.114µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | offsets.commit.required.acks = -1
grafana | logger=migrator t=2024-01-23T11:59:39.98100935Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
policy-db-migrator | --------------
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | offsets.commit.timeout.ms = 5000
grafana | logger=migrator t=2024-01-23T11:59:39.981036251Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=28.071µs
policy-db-migrator |
policy-pap | ssl.cipher.suites = null
kafka | offsets.load.buffer.size = 5242880
grafana | logger=migrator t=2024-01-23T11:59:39.984232744Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
policy-db-migrator |
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | offsets.retention.check.interval.ms = 600000
grafana | logger=migrator t=2024-01-23T11:59:39.986947902Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.714178ms
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | offsets.retention.minutes = 10080
grafana | logger=migrator t=2024-01-23T11:59:39.99181115Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
policy-db-migrator | --------------
policy-pap | ssl.engine.factory.class = null
kafka | offsets.topic.compression.codec = 0
grafana | logger=migrator t=2024-01-23T11:59:39.993861564Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.050195ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | ssl.key.password = null
kafka | offsets.topic.num.partitions = 50
grafana | logger=migrator t=2024-01-23T11:59:39.996886578Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
policy-db-migrator | --------------
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | offsets.topic.replication.factor = 1
grafana | logger=migrator t=2024-01-23T11:59:40.00086244Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=3.977872ms
policy-db-migrator |
policy-pap | ssl.keystore.certificate.chain = null
kafka | offsets.topic.segment.bytes = 104857600
grafana | logger=migrator t=2024-01-23T11:59:40.004332677Z level=info msg="Executing migration" id="Add column uid in dashboard"
policy-db-migrator |
policy-pap | ssl.keystore.key = null
kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
grafana | logger=migrator t=2024-01-23T11:59:40.007378491Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=3.045274ms
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-pap | ssl.keystore.location = null
kafka | password.encoder.iterations = 4096
grafana | logger=migrator t=2024-01-23T11:59:40.01052832Z level=info msg="Executing migration" id="Update uid column values in dashboard"
policy-db-migrator | --------------
policy-pap | ssl.keystore.password = null
kafka | password.encoder.key.length = 128
grafana | logger=migrator t=2024-01-23T11:59:40.010736211Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=207.89µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | ssl.keystore.type = JKS
kafka | password.encoder.keyfactory.algorithm = null
grafana | logger=migrator t=2024-01-23T11:59:40.014460609Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
policy-db-migrator | --------------
policy-pap | ssl.protocol = TLSv1.3
kafka | password.encoder.old.secret = null
grafana | logger=migrator t=2024-01-23T11:59:40.015337053Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=876.154µs
policy-db-migrator |
policy-pap | ssl.provider = null
kafka | password.encoder.secret = null
grafana | logger=migrator t=2024-01-23T11:59:40.018651491Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
policy-db-migrator |
policy-pap | ssl.secure.random.implementation = null
kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
grafana | logger=migrator t=2024-01-23T11:59:40.019791359Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.139377ms
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | process.roles = []
grafana | logger=migrator t=2024-01-23T11:59:40.023317737Z level=info msg="Executing migration" id="Update dashboard title length"
policy-pap | ssl.truststore.certificates = null
kafka | producer.id.expiration.check.interval.ms = 600000
grafana | logger=migrator t=2024-01-23T11:59:40.023357129Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=40.522µs
policy-db-migrator | --------------
policy-pap | ssl.truststore.location = null
kafka | producer.id.expiration.ms = 86400000
kafka | producer.purgatory.purge.interval.requests = 1000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | ssl.truststore.password = null
kafka | queued.max.request.bytes = -1
grafana | logger=migrator t=2024-01-23T11:59:40.02695091Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
policy-db-migrator | --------------
policy-pap | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-01-23T11:59:40.028364102Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.416932ms
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | queued.max.requests = 500
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:40.031562183Z level=info msg="Executing migration" id="create dashboard_provisioning"
policy-pap |
kafka | quota.window.num = 11
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:40.032219927Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=657.534µs
kafka | quota.window.size.seconds = 1
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-pap | [2024-01-23T12:00:11.229+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
grafana | logger=migrator t=2024-01-23T11:59:40.03545161Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
policy-db-migrator | --------------
policy-pap | [2024-01-23T12:00:11.229+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
grafana | logger=migrator t=2024-01-23T11:59:40.042791351Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=7.338621ms
kafka | remote.log.manager.task.interval.ms = 30000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | [2024-01-23T12:00:11.229+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011211228
grafana | logger=migrator t=2024-01-23T11:59:40.045882557Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
policy-db-migrator | --------------
policy-pap | [2024-01-23T12:00:11.231+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-1, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Subscribed to topic(s): policy-pdp-pap
grafana | logger=migrator t=2024-01-23T11:59:40.046543181Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=663.254µs
kafka | remote.log.manager.task.retry.backoff.ms = 500
policy-pap | [2024-01-23T12:00:11.232+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
grafana | logger=migrator t=2024-01-23T11:59:40.049623726Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
kafka | remote.log.manager.task.retry.jitter = 0.2
policy-db-migrator |
policy-pap | allow.auto.create.topics = true
grafana | logger=migrator t=2024-01-23T11:59:40.050711811Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.087665ms
kafka | remote.log.manager.thread.pool.size = 10
policy-db-migrator |
policy-pap | auto.commit.interval.ms = 5000
grafana | logger=migrator t=2024-01-23T11:59:40.053685222Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
kafka | remote.log.metadata.manager.class.name = null
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-pap | auto.include.jmx.reporter = true
grafana | logger=migrator t=2024-01-23T11:59:40.054602328Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=916.696µs
kafka | remote.log.metadata.manager.class.path = null
policy-db-migrator | --------------
policy-pap | auto.offset.reset = latest
grafana | logger=migrator t=2024-01-23T11:59:40.057635981Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
kafka | remote.log.metadata.manager.impl.prefix = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-pap | bootstrap.servers = [kafka:9092]
grafana | logger=migrator t=2024-01-23T11:59:40.057951097Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=315.046µs
kafka | remote.log.metadata.manager.listener.name = null
policy-db-migrator | --------------
policy-pap | check.crcs = true
grafana | logger=migrator t=2024-01-23T11:59:40.060955499Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
kafka | remote.log.reader.max.pending.tasks = 100
policy-db-migrator |
policy-pap | client.dns.lookup = use_all_dns_ips
grafana | logger=migrator t=2024-01-23T11:59:40.061528988Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=573.289µs
kafka | remote.log.reader.threads = 10
policy-db-migrator |
policy-pap | client.id = consumer-policy-pap-2
grafana | logger=migrator t=2024-01-23T11:59:40.064362101Z level=info msg="Executing migration" id="Add check_sum column"
kafka | remote.log.storage.manager.class.name = null
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-pap | client.rack =
grafana | logger=migrator t=2024-01-23T11:59:40.066491049Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.128728ms
kafka | remote.log.storage.manager.class.path = null
policy-db-migrator | --------------
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-01-23T11:59:40.069402896Z level=info msg="Executing migration" id="Add index for dashboard_title"
kafka | remote.log.storage.manager.impl.prefix = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | default.api.timeout.ms = 60000
grafana | logger=migrator t=2024-01-23T11:59:40.070593126Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.18969ms
kafka | remote.log.storage.system.enable = false
policy-db-migrator | --------------
policy-pap | enable.auto.commit = true
grafana | logger=migrator t=2024-01-23T11:59:40.074379028Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
kafka | replica.fetch.backoff.ms = 1000
policy-db-migrator |
policy-pap | exclude.internal.topics = true
grafana | logger=migrator t=2024-01-23T11:59:40.074568577Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=189.95µs
kafka | replica.fetch.max.bytes = 1048576
policy-db-migrator |
policy-pap | fetch.max.bytes = 52428800
grafana | logger=migrator t=2024-01-23T11:59:40.077464053Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
kafka | replica.fetch.min.bytes = 1
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-01-23T11:59:40.077660063Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=172.949µs
kafka | replica.fetch.response.max.bytes = 10485760
policy-db-migrator | --------------
policy-pap | fetch.min.bytes = 1
grafana | logger=migrator t=2024-01-23T11:59:40.08174695Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
kafka | replica.fetch.wait.max.ms = 500
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | group.id = policy-pap
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
policy-pap | group.instance.id = null
grafana | logger=migrator t=2024-01-23T11:59:40.082617084Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=870.084µs
policy-db-migrator | --------------
kafka | replica.lag.time.max.ms = 30000
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-01-23T11:59:40.086040577Z level=info msg="Executing migration" id="Add isPublic for dashboard"
policy-db-migrator |
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-01-23T11:59:40.089821638Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.780101ms
policy-db-migrator |
kafka | replica.selector.class = null
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-01-23T11:59:40.093435671Z level=info msg="Executing migration" id="create data_source table"
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
kafka | replica.socket.receive.buffer.bytes = 65536
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:40.094314515Z level=info msg="Migration successfully executed" id="create data_source table" duration=876.984µs
kafka | replica.socket.timeout.ms = 30000
policy-pap | isolation.level = read_uncommitted
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
grafana | logger=migrator t=2024-01-23T11:59:40.098925828Z level=info msg="Executing migration" id="add index data_source.account_id"
kafka | replication.quota.window.num = 11
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:40.100020804Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.095495ms
policy-pap | max.partition.fetch.bytes = 1048576
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:40.103448217Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
kafka | replication.quota.window.size.seconds = 1
policy-pap | max.poll.interval.ms = 300000
policy-db-migrator |
kafka | request.timeout.ms = 30000
policy-pap | max.poll.records = 500
grafana | logger=migrator t=2024-01-23T11:59:40.104455678Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.007131ms
kafka | reserved.broker.max.id = 1000
policy-pap | metadata.max.age.ms = 300000
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
grafana | logger=migrator t=2024-01-23T11:59:40.107699432Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-01-23T11:59:40.108584816Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=889.005µs
kafka | sasl.client.callback.handler.class = null
policy-db-migrator | --------------
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-01-23T11:59:40.112874463Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
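Each policy-db-migrator stanza above follows the same shape: a "> upgrade NNNN-<table>.sql" header, a separator, and an idempotent CREATE TABLE IF NOT EXISTS statement. The migrator applies those .sql files itself; purely as an illustration of what one step amounts to, here is a JDBC sketch with a placeholder URL and credentials (not the CSIT values) executing the DDL from the 0310-jpatoscapolicytype_metadata.sql step logged above.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MigrationStepSketch {
        public static void main(String[] args) throws Exception {
            // placeholder connection details; the real migrator reads its target from configuration
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_pass");
                 Statement stmt = conn.createStatement()) {
                // same idempotent DDL shape as the logged 0310 upgrade step
                stmt.executeUpdate("CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata ("
                        + "name VARCHAR(120) NULL, version VARCHAR(20) NULL, "
                        + "METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)");
            }
        }
    }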
UQE_data_source_account_id_name - v1" kafka | sasl.enabled.mechanisms = [GSSAPI] policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-01-23T11:59:40.114160908Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.279545ms kafka | sasl.jaas.config = null policy-db-migrator | -------------- policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-01-23T11:59:40.117860315Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] grafana | logger=migrator t=2024-01-23T11:59:40.128910274Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=11.048388ms policy-db-migrator | policy-pap | receive.buffer.bytes = 65536 kafka | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-01-23T11:59:40.154867216Z level=info msg="Executing migration" id="create data_source table v2" policy-pap | reconnect.backoff.max.ms = 1000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql grafana | logger=migrator t=2024-01-23T11:59:40.156576242Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.706817ms policy-pap | reconnect.backoff.ms = 50 kafka | sasl.kerberos.service.name = null policy-db-migrator | -------------- policy-pap | request.timeout.ms = 30000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-23T11:59:40.161439888Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" kafka | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | retry.backoff.ms = 100 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:40.162370455Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=930.037µs kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:40.16583946Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" kafka | sasl.login.callback.handler.class = null policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:40.166737526Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=897.705µs policy-pap | sasl.jaas.config = null kafka | sasl.login.class = null policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql grafana | logger=migrator t=2024-01-23T11:59:40.172287676Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.login.connect.timeout.ms = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:40.173196292Z 
level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=906.716µs policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-01-23T11:59:40.177221715Z level=info msg="Executing migration" id="Add column with_credentials" policy-pap | sasl.kerberos.service.name = null kafka | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-23T11:59:40.181364415Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=4.1419ms kafka | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:40.193698368Z level=info msg="Executing migration" id="Add secure json data column" kafka | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:40.196377754Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.720578ms kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:40.2370666Z level=info msg="Executing migration" id="Update data_source table charset" kafka | sasl.login.retry.backoff.ms = 100 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql grafana | logger=migrator t=2024-01-23T11:59:40.237158525Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=95.035µs policy-pap | sasl.login.class = null kafka | sasl.mechanism.controller.protocol = GSSAPI policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:40.249827735Z level=info msg="Executing migration" id="Update initial version to 1" policy-pap | sasl.login.connect.timeout.ms = null kafka | sasl.mechanism.inter.broker.protocol = GSSAPI policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-23T11:59:40.250158712Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=334.417µs policy-pap | sasl.login.read.timeout.ms = null kafka | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:40.261306795Z level=info msg="Executing migration" id="Add read_only data column" policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.oauthbearer.expected.audience = null policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:40.266014353Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.707908ms policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.oauthbearer.expected.issuer = null policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:40.272497081Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" policy-pap | sasl.login.refresh.window.factor 
= 0.8 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql grafana | logger=migrator t=2024-01-23T11:59:40.272736413Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=240.352µs policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:40.282423862Z level=info msg="Executing migration" id="Update json_data with nulls" policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-23T11:59:40.282718586Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=295.975µs policy-pap | sasl.login.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:40.291458098Z level=info msg="Executing migration" id="Add uid column" policy-pap | sasl.mechanism = GSSAPI kafka | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:40.298625Z level=info msg="Migration successfully executed" id="Add uid column" duration=7.161142ms policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:40.322259995Z level=info msg="Executing migration" id="Update uid value" policy-pap | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql grafana | logger=migrator t=2024-01-23T11:59:40.322474536Z level=info msg="Migration successfully executed" id="Update uid value" duration=216.591µs policy-pap | sasl.oauthbearer.expected.issuer = null kafka | sasl.server.callback.handler.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:40.326555042Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.server.max.receive.size = 524288 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-23T11:59:40.327426956Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=871.964µs policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | security.inter.broker.protocol = PLAINTEXT policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:40.331014217Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | security.providers = null grafana | logger=migrator t=2024-01-23T11:59:40.332689192Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.690406ms policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | kafka | 
server.max.startup.time.ms = 9223372036854775807 policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:40.342915539Z level=info msg="Executing migration" id="create api_key table" kafka | socket.connection.setup.timeout.max.ms = 30000 policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql grafana | logger=migrator t=2024-01-23T11:59:40.343846166Z level=info msg="Migration successfully executed" id="create api_key table" duration=930.857µs policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-01-23T11:59:40.350535504Z level=info msg="Executing migration" id="add index api_key.account_id" kafka | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | -------------- policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-01-23T11:59:40.35144114Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=900.395µs kafka | socket.listen.backlog.size = 50 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-pap | security.providers = null grafana | logger=migrator t=2024-01-23T11:59:40.355747697Z level=info msg="Executing migration" id="add index api_key.key" kafka | socket.receive.buffer.bytes = 102400 policy-db-migrator | -------------- policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-01-23T11:59:40.356704016Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=956.939µs kafka | socket.request.max.bytes = 104857600 policy-db-migrator | policy-pap | session.timeout.ms = 45000 grafana | logger=migrator t=2024-01-23T11:59:40.359904498Z level=info msg="Executing migration" id="add index api_key.account_id_name" kafka | socket.send.buffer.bytes = 102400 policy-db-migrator | policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-01-23T11:59:40.360885047Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=977.33µs kafka | ssl.cipher.suites = [] policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-01-23T11:59:40.367938834Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" kafka | ssl.client.auth = none policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-01-23T11:59:40.368811058Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=872.394µs policy-db-migrator | -------------- kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-01-23T11:59:40.371767917Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) kafka | ssl.endpoint.identification.algorithm = https policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-01-23T11:59:40.372629121Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=859.794µs policy-db-migrator | -------------- kafka | 
ssl.engine.factory.class = null policy-pap | ssl.engine.factory.class = null policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:40.378309498Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" kafka | ssl.key.password = null policy-pap | ssl.key.password = null policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:40.379183042Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=875.084µs kafka | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql grafana | logger=migrator t=2024-01-23T11:59:40.390170847Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" kafka | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:40.400335721Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=10.166574ms kafka | ssl.keystore.key = null grafana | logger=migrator t=2024-01-23T11:59:40.406138894Z level=info msg="Executing migration" id="create api_key table v2" policy-pap | ssl.keystore.key = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-23T11:59:40.406798228Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=661.414µs policy-pap | ssl.keystore.location = null kafka | ssl.keystore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:40.410366248Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" policy-pap | ssl.keystore.password = null kafka | ssl.keystore.password = null policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:40.41178921Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.423782ms kafka | ssl.keystore.type = JKS policy-db-migrator | policy-pap | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-pap | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-01-23T11:59:40.416359551Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" kafka | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null grafana | logger=migrator t=2024-01-23T11:59:40.417774712Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.404661ms policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:40.461140054Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" kafka | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-23T11:59:40.463261031Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=2.123587ms kafka | ssl.secure.random.implementation = null 
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-db-migrator | --------------
policy-pap | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-01-23T11:59:40.468663945Z level=info msg="Executing migration" id="copy api_key v1 to v2"
kafka | ssl.trustmanager.algorithm = PKIX
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:40.469435034Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=771.619µs
kafka | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:40.473055867Z level=info msg="Executing migration" id="Drop old table api_key_v1"
kafka | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
kafka | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-01-23T11:59:40.474079978Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.024692ms
policy-db-migrator | --------------
kafka | ssl.truststore.type = JKS
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-01-23T11:59:40.480727604Z level=info msg="Executing migration" id="Update api_key table charset"
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
policy-pap |
grafana | logger=migrator t=2024-01-23T11:59:40.480808318Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=79.384µs
policy-db-migrator | --------------
kafka | transaction.max.timeout.ms = 900000
policy-pap | [2024-01-23T12:00:11.238+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
grafana | logger=migrator t=2024-01-23T11:59:40.485370169Z level=info msg="Executing migration" id="Add expires to api_key table"
policy-db-migrator |
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
policy-pap | [2024-01-23T12:00:11.238+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
grafana | logger=migrator t=2024-01-23T11:59:40.488611723Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.241244ms
policy-db-migrator |
kafka | transaction.state.log.load.buffer.size = 5242880
policy-pap | [2024-01-23T12:00:11.238+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011211238
grafana | logger=migrator t=2024-01-23T11:59:40.494416936Z level=info msg="Executing migration" id="Add service account foreign key"
policy-db-migrator | > upgrade 0450-pdpgroup.sql
kafka | transaction.state.log.min.isr = 2
policy-pap | [2024-01-23T12:00:11.238+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
grafana | logger=migrator t=2024-01-23T11:59:40.500132485Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=5.717109ms
policy-db-migrator | --------------
kafka | transaction.state.log.num.partitions = 50
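The policy-pap lines above show a plain Kafka consumer being wired up: key and value deserializers are StringDeserializer, bootstrap.servers is [kafka:9092], session.timeout.ms is 45000, and the consumer subscribes to the policy-pdp-pap topic. Below is a minimal standalone sketch of that subscription, assuming only the values visible in this log; PAP itself wraps the consumer in ONAP's SingleThreadedKafkaTopicSource (seen further down), and the class and variable names here are illustrative, not PAP source code. The 15-second poll mirrors the fetchTimeout=15000 reported in the TopicBase dump later in the log.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapListenerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values taken from the ConsumerConfig dump in this log.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 45000);

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Same subscription the log reports for consumer-policy-pap-2.
                consumer.subscribe(List.of("policy-pdp-pap"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(15000));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }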
grafana | logger=migrator t=2024-01-23T11:59:40.50655447Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
kafka | transaction.state.log.replication.factor = 3
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
policy-pap | [2024-01-23T12:00:11.576+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
grafana | logger=migrator t=2024-01-23T11:59:40.506683106Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=128.696µs
kafka | transaction.state.log.segment.bytes = 104857600
policy-db-migrator | --------------
policy-pap | [2024-01-23T12:00:11.743+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
grafana | logger=migrator t=2024-01-23T11:59:40.519390398Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
grafana | logger=migrator t=2024-01-23T11:59:40.523440093Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.045065ms
policy-db-migrator |
policy-pap | [2024-01-23T12:00:12.032+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@1cdad619, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@319058ce, org.springframework.security.web.context.SecurityContextHolderFilter@1fa796a4, org.springframework.security.web.header.HeaderWriterFilter@3879feec, org.springframework.security.web.authentication.logout.LogoutFilter@259c6ab8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@13018f00, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@8dcacf1, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@73c09a98, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3909308c, org.springframework.security.web.access.ExceptionTranslationFilter@280c3dc0, org.springframework.security.web.access.intercept.AuthorizationFilter@44a9971f]
kafka | transactional.id.expiration.ms = 604800000
grafana | logger=migrator t=2024-01-23T11:59:40.528800494Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
policy-pap | [2024-01-23T12:00:12.932+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
kafka | unclean.leader.election.enable = false
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:40.53130329Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.502476ms
policy-pap | [2024-01-23T12:00:13.034+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
kafka | unstable.api.versions.enable = false
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
grafana | logger=migrator t=2024-01-23T11:59:40.541503926Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
policy-pap | [2024-01-23T12:00:13.059+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
kafka | zookeeper.clientCnxnSocket = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:40.542593011Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.088905ms
policy-pap | [2024-01-23T12:00:13.078+00:00|INFO|ServiceManager|main] Policy PAP starting
kafka | zookeeper.connect = zookeeper:2181
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-01-23T11:59:40.64546337Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
policy-pap | [2024-01-23T12:00:13.078+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
kafka | zookeeper.connection.timeout.ms = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:40.645967486Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=507.056µs
policy-pap | [2024-01-23T12:00:13.078+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
kafka | zookeeper.max.in.flight.requests = 10
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:40.65219169Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
policy-pap | [2024-01-23T12:00:13.079+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
kafka | zookeeper.metadata.migration.enable = false
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:40.653014312Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=822.412µs
policy-pap | [2024-01-23T12:00:13.079+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
kafka | zookeeper.session.timeout.ms = 18000
policy-db-migrator | > upgrade 0470-pdp.sql
grafana | logger=migrator t=2024-01-23T11:59:40.658849877Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
policy-pap | [2024-01-23T12:00:13.080+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
kafka | zookeeper.set.acl = false
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:40.660708811Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.858424ms
policy-pap | [2024-01-23T12:00:13.080+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
kafka | zookeeper.ssl.cipher.suites = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-01-23T11:59:40.664328454Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
policy-pap | [2024-01-23T12:00:13.086+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=7faaa365-1216-4c85-9c2d-e9bca189fc3d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@166d576b
kafka | zookeeper.ssl.client.enable = false
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:40.665126654Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=798.41µs
policy-pap | [2024-01-23T12:00:13.097+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=7faaa365-1216-4c85-9c2d-e9bca189fc3d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | zookeeper.ssl.crl.enable = false
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:40.669288515Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
policy-pap | [2024-01-23T12:00:13.098+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | zookeeper.ssl.enabled.protocols = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:40.670133577Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=844.853µs
policy-pap | allow.auto.create.topics = true
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
grafana | logger=migrator t=2024-01-23T11:59:40.675882578Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
policy-pap | auto.commit.interval.ms = 5000
policy-db-migrator | --------------
kafka | zookeeper.ssl.keystore.location = null
grafana | logger=migrator t=2024-01-23T11:59:40.675952411Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=75.374µs
policy-pap | auto.include.jmx.reporter = true
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
kafka | zookeeper.ssl.keystore.password = null
grafana | logger=migrator t=2024-01-23T11:59:40.679611916Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
policy-pap | auto.offset.reset = latest
policy-db-migrator | --------------
kafka | zookeeper.ssl.keystore.type = null
grafana | logger=migrator t=2024-01-23T11:59:40.679632837Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=24.001µs
policy-pap | bootstrap.servers = [kafka:9092]
policy-db-migrator |
kafka | zookeeper.ssl.ocsp.enable = false
grafana | logger=migrator t=2024-01-23T11:59:40.684859312Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
policy-pap | check.crcs = true
policy-db-migrator |
kafka | zookeeper.ssl.protocol = TLSv1.2
grafana | logger=migrator t=2024-01-23T11:59:40.687748938Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.890236ms
policy-pap | client.dns.lookup = use_all_dns_ips
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
kafka | zookeeper.ssl.truststore.location = null
grafana | logger=migrator t=2024-01-23T11:59:40.69373106Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
policy-pap | client.id = consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3
policy-db-migrator | --------------
kafka | zookeeper.ssl.truststore.password = null
grafana | logger=migrator t=2024-01-23T11:59:40.696401005Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.674275ms
policy-pap | client.rack =
kafka | zookeeper.ssl.truststore.type = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-01-23T11:59:40.7048188Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
policy-pap | connections.max.idle.ms = 540000
kafka | (kafka.server.KafkaConfig)
kafka | [2024-01-23 11:59:43,847] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
grafana | logger=migrator t=2024-01-23T11:59:40.704905705Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=85.174µs
grafana | logger=migrator t=2024-01-23T11:59:40.710044704Z level=info msg="Executing migration" id="create quota table v1"
policy-pap | default.api.timeout.ms = 60000
kafka | [2024-01-23 11:59:43,851] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
grafana | logger=migrator t=2024-01-23T11:59:40.711171971Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.127417ms
policy-pap | enable.auto.commit = true
kafka | [2024-01-23 11:59:43,852] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
grafana | logger=migrator t=2024-01-23T11:59:40.714459438Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
policy-db-migrator | --------------
kafka | [2024-01-23 11:59:43,854] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-pap | exclude.internal.topics = true
grafana | logger=migrator t=2024-01-23T11:59:40.715616066Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.157649ms
policy-db-migrator |
kafka | [2024-01-23 11:59:43,887] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:40.720972977Z level=info msg="Executing migration" id="Update quota table charset"
kafka | [2024-01-23 11:59:43,893] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:40.721002828Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=31.081µs
grafana | logger=migrator t=2024-01-23T11:59:40.723964118Z level=info msg="Executing migration" id="create plugin_setting table"
grafana | logger=migrator t=2024-01-23T11:59:40.724666243Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=701.945µs
kafka | [2024-01-23 11:59:43,904] INFO Loaded 0 logs in 16ms (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:40.727668085Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
policy-pap | fetch.max.bytes = 52428800
policy-pap | fetch.max.wait.ms = 500
kafka | [2024-01-23 11:59:43,905] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:40.72915603Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.481435ms
policy-db-migrator |
policy-pap | fetch.min.bytes = 1
kafka | [2024-01-23 11:59:43,906] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:40.735103151Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
policy-pap | group.id = 7faaa365-1216-4c85-9c2d-e9bca189fc3d
kafka | [2024-01-23 11:59:43,916] INFO Starting the log cleaner (kafka.log.LogCleaner)
grafana | logger=migrator t=2024-01-23T11:59:40.737113823Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.010521ms
policy-db-migrator | --------------
policy-pap | group.instance.id = null
kafka | [2024-01-23 11:59:43,966] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
grafana | logger=migrator t=2024-01-23T11:59:40.739958596Z level=info msg="Executing migration" id="Update plugin_setting table charset"
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | heartbeat.interval.ms = 3000
kafka | [2024-01-23 11:59:43,983] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
grafana | logger=migrator t=2024-01-23T11:59:40.739978247Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=20.171µs
policy-db-migrator | --------------
policy-pap | interceptor.classes = []
kafka | [2024-01-23 11:59:44,017] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
grafana | logger=migrator t=2024-01-23T11:59:40.743692445Z level=info msg="Executing migration" id="create session table"
policy-db-migrator |
policy-pap | internal.leave.group.on.close = true
kafka | [2024-01-23 11:59:44,080] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
grafana | logger=migrator t=2024-01-23T11:59:40.744438103Z level=info msg="Migration successfully executed" id="create session table" duration=745.308µs
policy-db-migrator |
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
kafka | [2024-01-23 11:59:44,447] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
grafana | logger=migrator t=2024-01-23T11:59:40.748856036Z level=info msg="Executing migration" id="Drop old table playlist table"
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
policy-pap | isolation.level = read_uncommitted
kafka | [2024-01-23 11:59:44,473] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
grafana | logger=migrator t=2024-01-23T11:59:40.74894074Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=85.044µs
policy-db-migrator | --------------
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-01-23 11:59:44,474] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
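Each policy-db-migrator step above follows the same shape: a "> upgrade NNNN-<name>.sql" banner, the statement bracketed by "--------------" separators, and DDL written as CREATE TABLE IF NOT EXISTS so a rerun of the same numbered script is a no-op. A minimal sketch of one such step executed over JDBC, assuming an illustrative MariaDB URL and credentials (the real migrator takes its connection details from its own configuration, and the mariadb-java-client driver is assumed to be on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MigratorStepSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details, for illustration only.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_user");
                 Statement stmt = conn.createStatement()) {
                // One upgrade step, with the DDL verbatim from the 0500-pdpsubgroup.sql
                // line in this log; IF NOT EXISTS keeps the step idempotent.
                stmt.executeUpdate("CREATE TABLE IF NOT EXISTS pdpsubgroup ("
                    + "CURRENTINSTANCECOUNT INT DEFAULT NULL, "
                    + "DESIREDINSTANCECOUNT INT DEFAULT NULL, "
                    + "parentLocalName VARCHAR(120) NOT NULL, "
                    + "localName VARCHAR(120) NOT NULL, "
                    + "parentKeyVersion VARCHAR(15) NOT NULL, "
                    + "parentKeyName VARCHAR(120) NOT NULL, "
                    + "PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))");
            }
        }
    }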
grafana | logger=migrator t=2024-01-23T11:59:40.751569433Z level=info msg="Executing migration" id="Drop old table playlist_item table"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
policy-pap | max.partition.fetch.bytes = 1048576
kafka | [2024-01-23 11:59:44,479] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
grafana | logger=migrator t=2024-01-23T11:59:40.751649477Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=80.044µs
policy-db-migrator | --------------
policy-pap | max.poll.interval.ms = 300000
kafka | [2024-01-23 11:59:44,483] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
grafana | logger=migrator t=2024-01-23T11:59:40.753850809Z level=info msg="Executing migration" id="create playlist table v2"
policy-db-migrator |
policy-pap | max.poll.records = 500
kafka | [2024-01-23 11:59:44,502] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-01-23T11:59:40.754487441Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=636.443µs
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-01-23 11:59:44,504] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-01-23 11:59:44,506] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-01-23T11:59:40.757347865Z level=info msg="Executing migration" id="create playlist item table v2"
kafka | [2024-01-23 11:59:44,506] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-01-23 11:59:44,521] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-01-23T11:59:40.758005899Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=658.034µs
kafka | [2024-01-23 11:59:44,541] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2024-01-23 11:59:44,579] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1706011184553,1706011184553,1,0,0,72057612285313025,258,0,27
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-01-23T11:59:40.764343679Z level=info msg="Executing migration" id="Update playlist table charset"
kafka | (kafka.zk.KafkaZkClient)
kafka | [2024-01-23 11:59:44,581] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-01-23T11:59:40.764379951Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=38.422µs
kafka | [2024-01-23 11:59:44,664] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2024-01-23 11:59:44,671] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-01-23T11:59:40.767360211Z level=info msg="Executing migration" id="Update playlist_item table charset"
kafka | [2024-01-23 11:59:44,682] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-01-23 11:59:44,682] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=migrator t=2024-01-23T11:59:40.767408944Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=49.303µs
kafka | [2024-01-23 11:59:44,685] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2024-01-23 11:59:44,699] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
policy-pap | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-01-23T11:59:40.770001075Z level=info msg="Executing migration" id="Add playlist column created_at"
kafka | [2024-01-23 11:59:44,703] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2024-01-23 11:59:44,705] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-01-23T11:59:40.773650219Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.648434ms
kafka | [2024-01-23 11:59:44,706] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2024-01-23 11:59:44,710] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-01-23T11:59:40.779706585Z level=info msg="Executing migration" id="Add playlist column updated_at"
kafka | [2024-01-23 11:59:44,728] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2024-01-23 11:59:44,732] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-01-23T11:59:40.781836943Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.132668ms
kafka | [2024-01-23 11:59:44,737] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2024-01-23 11:59:44,739] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-23T11:59:40.787633156Z level=info msg="Executing migration" id="drop preferences table v2"
kafka | [2024-01-23 11:59:44,739] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2024-01-23 11:59:44,744] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-01-23T11:59:40.787763263Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=130.107µs
kafka | [2024-01-23 11:59:44,746] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2024-01-23 11:59:44,749] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-01-23T11:59:40.79126581Z level=info msg="Executing migration" id="drop preferences table v3"
kafka | [2024-01-23 11:59:44,766] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2024-01-23 11:59:44,772] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-01-23T11:59:40.791391396Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=125.586µs
policy-db-migrator |
kafka | [2024-01-23 11:59:44,774] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-01-23T11:59:40.803444715Z level=info msg="Executing migration" id="create preferences table v3"
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
kafka | [2024-01-23 11:59:44,782] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-01-23T11:59:40.804211534Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=798.121µs
policy-db-migrator | --------------
kafka | [2024-01-23 11:59:44,788] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-01-23T11:59:40.811241719Z level=info msg="Executing migration" id="Update preferences table charset"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
kafka | [2024-01-23 11:59:44,790] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-01-23T11:59:40.811274511Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=33.692µs
policy-db-migrator | --------------
kafka | [2024-01-23 11:59:44,790] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-01-23T11:59:40.814052321Z level=info msg="Executing migration" id="Add column team_id in preferences"
policy-db-migrator |
policy-pap | sasl.login.class = null
kafka | [2024-01-23 11:59:44,790] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-23T11:59:40.818101376Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.048995ms
policy-db-migrator |
policy-pap | sasl.login.connect.timeout.ms = null
kafka | [2024-01-23 11:59:44,791] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-23T11:59:40.821275696Z level=info msg="Executing migration" id="Update team_id column values in preferences"
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-pap | sasl.login.read.timeout.ms = null
kafka | [2024-01-23 11:59:44,794] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-23T11:59:40.821478927Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=204.061µs
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | [2024-01-23 11:59:44,795] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-23T11:59:40.824484928Z level=info msg="Executing migration" id="Add column week_start in preferences"
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-01-23T11:59:40.829175186Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.689927ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | [2024-01-23 11:59:44,795] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-01-23T11:59:40.834937947Z level=info msg="Executing migration" id="Add column preferences.json_data"
policy-db-migrator | --------------
kafka | [2024-01-23 11:59:44,796] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-01-23T11:59:40.837327448Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.392751ms
msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.392751ms policy-db-migrator | kafka | [2024-01-23 11:59:44,798] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-01-23T11:59:40.840419404Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" policy-db-migrator | kafka | [2024-01-23 11:59:44,802] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-01-23T11:59:40.840492638Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=73.733µs policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql kafka | [2024-01-23 11:59:44,804] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-01-23T11:59:40.843210735Z level=info msg="Executing migration" id="Add preferences index org_id" policy-db-migrator | -------------- kafka | [2024-01-23 11:59:44,808] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-01-23T11:59:40.84409436Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=883.505µs policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) kafka | [2024-01-23 11:59:44,809] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-01-23T11:59:40.849939195Z level=info msg="Executing migration" id="Add preferences index user_id" policy-db-migrator | -------------- kafka | [2024-01-23 11:59:44,812] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-01-23T11:59:40.851165697Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.226452ms policy-db-migrator | kafka | [2024-01-23 11:59:44,812] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-01-23T11:59:40.854940878Z level=info msg="Executing migration" id="create alert table v1" policy-db-migrator | kafka | [2024-01-23 11:59:44,813] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql grafana | logger=migrator t=2024-01-23T11:59:40.856713587Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.772679ms kafka | [2024-01-23 11:59:44,813] INFO 
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:40.86051574Z level=info msg="Executing migration" id="add index alert org_id & id "
kafka | [2024-01-23 11:59:44,815] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
grafana | logger=migrator t=2024-01-23T11:59:40.861817285Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.294326ms
kafka | [2024-01-23 11:59:44,815] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:40.86586613Z level=info msg="Executing migration" id="add index alert state"
kafka | [2024-01-23 11:59:44,817] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:40.866953245Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.086905ms
kafka | [2024-01-23 11:59:44,823] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:40.870109394Z level=info msg="Executing migration" id="add index alert dashboard_id"
kafka | [2024-01-23 11:59:44,826] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
policy-pap | security.protocol = PLAINTEXT
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
grafana | logger=migrator t=2024-01-23T11:59:40.871100795Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=990.87µs
kafka | [2024-01-23 11:59:44,828] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
policy-pap | security.providers = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:40.875967531Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
kafka | [2024-01-23 11:59:44,828] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
policy-pap | send.buffer.bytes = 131072
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-01-23T11:59:40.876641545Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=674.014µs
kafka | [2024-01-23 11:59:44,828] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
policy-pap | session.timeout.ms = 45000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:40.880170873Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
kafka | [2024-01-23 11:59:44,829] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.9:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
grafana | logger=migrator t=2024-01-23T11:59:40.881466999Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.296126ms
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator |
kafka | [2024-01-23 11:59:44,829] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
grafana | logger=migrator t=2024-01-23T11:59:40.886693303Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-db-migrator |
kafka | [2024-01-23 11:59:44,829] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-23T11:59:40.887793708Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.100405ms
policy-db-migrator | > upgrade 0570-toscadatatype.sql
kafka | [2024-01-23 11:59:44,830] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-23T11:59:40.892796271Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
policy-db-migrator | --------------
kafka | [2024-01-23 11:59:44,831] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
policy-pap | ssl.cipher.suites = null
grafana | logger=migrator t=2024-01-23T11:59:40.906057471Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=13.2612ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed.
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-01-23T11:59:40.908911436Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
policy-db-migrator | --------------
kafka | 	at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
policy-pap | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-01-23T11:59:40.909369149Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=457.713µs
policy-db-migrator |
kafka | 	at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298)
policy-pap | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-01-23T11:59:40.912117798Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
policy-db-migrator |
kafka | 	at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251)
policy-pap | ssl.key.password = null
grafana | logger=migrator t=2024-01-23T11:59:40.913217433Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.099635ms
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
kafka | 	at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:127)
policy-pap | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-01-23T11:59:40.918817006Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
policy-db-migrator | --------------
kafka | [2024-01-23 11:59:44,833] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient)
policy-pap | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-01-23T11:59:40.919157233Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=339.937µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
kafka | [2024-01-23 11:59:44,840] INFO Kafka version: 7.5.3-ccs (org.apache.kafka.common.utils.AppInfoParser)
policy-pap | ssl.keystore.key = null
grafana | logger=migrator t=2024-01-23T11:59:40.921695802Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
policy-db-migrator | --------------
kafka | [2024-01-23 11:59:44,841] INFO Kafka commitId: 9090b26369455a2f335fbb5487fb89675ee406ab (org.apache.kafka.common.utils.AppInfoParser)
policy-pap | ssl.keystore.location = null
grafana | logger=migrator t=2024-01-23T11:59:40.922142324Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=446.522µs
policy-db-migrator |
kafka | [2024-01-23 11:59:44,841] INFO Kafka startTimeMs: 1706011184832 (org.apache.kafka.common.utils.AppInfoParser)
policy-pap | ssl.keystore.password = null
policy-db-migrator |
kafka | [2024-01-23 11:59:44,843] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
grafana | logger=migrator t=2024-01-23T11:59:40.924917275Z level=info msg="Executing migration" id="create alert_notification table v1"
policy-pap | ssl.keystore.type = JKS
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
kafka | [2024-01-23 11:59:44,856] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-23T11:59:40.926299814Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.38172ms
policy-pap | ssl.protocol = TLSv1.3
policy-db-migrator | --------------
kafka | [2024-01-23 11:59:44,937] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
grafana | logger=migrator t=2024-01-23T11:59:40.932648445Z level=info msg="Executing migration" id="Add column is_default"
policy-pap | ssl.provider = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | [2024-01-23 11:59:45,027] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:40.938148633Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.495628ms
policy-pap | ssl.secure.random.implementation = null
policy-db-migrator | --------------
kafka | [2024-01-23 11:59:45,112] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
grafana | logger=migrator t=2024-01-23T11:59:40.944501384Z level=info msg="Executing migration" id="Add column frequency"
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-db-migrator |
kafka | [2024-01-23 11:59:45,112] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
grafana | logger=migrator t=2024-01-23T11:59:40.949788742Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.286087ms
policy-pap | ssl.truststore.certificates = null
policy-db-migrator |
kafka | [2024-01-23 11:59:49,857] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-23T11:59:40.954320261Z level=info msg="Executing migration" id="Add column send_reminder"
policy-pap | ssl.truststore.location = null
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
kafka | [2024-01-23 11:59:49,858] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-23T11:59:40.958591797Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.271536ms
policy-pap | ssl.truststore.password = null
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,628] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-23T11:59:40.962053902Z level=info msg="Executing migration" id="Add column disable_resolve_message"
policy-pap | ssl.truststore.type = JKS
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
kafka | [2024-01-23 12:00:13,639] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-23T11:59:40.966188561Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.134659ms
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,642] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
grafana | logger=migrator t=2024-01-23T11:59:40.971457037Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
policy-pap |
policy-db-migrator |
kafka | [2024-01-23 12:00:13,642] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
grafana | logger=migrator t=2024-01-23T11:59:40.97271706Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.313876ms
policy-pap | [2024-01-23T12:00:13.104+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-db-migrator |
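The two "Creating topic" lines above show the broker auto-creating policy-pdp-pap (one partition, one replica on broker 1) and __consumer_offsets (50 partitions) the moment PAP's consumer asks for them, since allow.auto.create.topics is true. The same policy-pdp-pap topic could instead be created explicitly with the Kafka admin client; a minimal sketch, assuming the single-broker layout seen in this log (class name illustrative):

    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class TopicCreationSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");

            try (Admin admin = Admin.create(props)) {
                // Mirrors the assignment in the log: one partition, replication factor 1.
                NewTopic topic = new NewTopic("policy-pdp-pap", 1, (short) 1);
                admin.createTopics(List.of(topic)).all().get();
            }
        }
    }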
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-23T11:59:40.975974095Z level=info msg="Executing migration" id="Update alert table charset" policy-pap | [2024-01-23T12:00:13.104+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-db-migrator | > upgrade 0610-toscanodetemplates.sql kafka | [2024-01-23 12:00:13,722] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-23T11:59:40.976089491Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=115.316µs policy-pap | [2024-01-23T12:00:13.104+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011213104 policy-db-migrator | -------------- kafka | [2024-01-23 12:00:13,726] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:40.980130565Z level=info msg="Executing migration" id="Update alert_notification table charset" policy-pap | [2024-01-23T12:00:13.105+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) kafka | [2024-01-23 12:00:13,726] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:40.980185938Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=60.253µs policy-pap | [2024-01-23T12:00:13.105+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-db-migrator | -------------- kafka | [2024-01-23 12:00:13,726] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:40.984398941Z level=info msg="Executing migration" id="create notification_journal table v1" policy-pap | [2024-01-23T12:00:13.105+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=a4c34505-3ec0-419b-8744-c011170ffba7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@712c9bcf policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:40.985387551Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=988.75µs policy-pap | [2024-01-23T12:00:13.105+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=a4c34505-3ec0-419b-8744-c011170ffba7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | kafka | [2024-01-23 12:00:13,726] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:40.989643876Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" policy-pap | [2024-01-23T12:00:13.106+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql kafka | [2024-01-23 12:00:13,726] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:40.990693449Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.049183ms policy-pap | allow.auto.create.topics = true policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:40.996175996Z level=info msg="Executing migration" id="drop alert_notification_journal" kafka | [2024-01-23 
12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | auto.commit.interval.ms = 5000 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-01-23T11:59:40.997895323Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.681055ms kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | auto.include.jmx.reporter = true policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:41.001891925Z level=info msg="Executing migration" id="create alert_notification_state table v1" kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | auto.offset.reset = latest policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:41.002651363Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=759.348µs kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:41.00794646Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | check.crcs = true policy-db-migrator | > upgrade 0630-toscanodetype.sql grafana | logger=migrator t=2024-01-23T11:59:41.009808553Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.864714ms kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:41.014388662Z level=info msg="Executing migration" id="Add for to alert table" kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | client.id = consumer-policy-pap-4 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion 
VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) grafana | logger=migrator t=2024-01-23T11:59:41.020696488Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=6.306776ms kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | client.rack = policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:41.068841378Z level=info msg="Executing migration" id="Add column uid in alert_notification" kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | default.api.timeout.ms = 60000 grafana | logger=migrator t=2024-01-23T11:59:41.075009777Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=6.169869ms policy-db-migrator | kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | enable.auto.commit = true grafana | logger=migrator t=2024-01-23T11:59:41.079870661Z level=info msg="Executing migration" id="Update uid column values in alert_notification" policy-db-migrator | > upgrade 0640-toscanodetypes.sql kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | exclude.internal.topics = true grafana | logger=migrator t=2024-01-23T11:59:41.080017068Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=147.127µs policy-db-migrator | -------------- kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | fetch.max.bytes = 52428800 grafana | logger=migrator t=2024-01-23T11:59:41.08265894Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | fetch.max.wait.ms = 500 grafana | logger=migrator t=2024-01-23T11:59:41.083307853Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=648.753µs policy-db-migrator | -------------- kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | fetch.min.bytes = 1 grafana | logger=migrator 
t=2024-01-23T11:59:41.089525454Z level=info msg="Executing migration" id="Remove unique index org_id_name" policy-db-migrator | kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | group.id = policy-pap grafana | logger=migrator t=2024-01-23T11:59:41.090118564Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=591.67µs policy-db-migrator | kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | group.instance.id = null grafana | logger=migrator t=2024-01-23T11:59:41.095744195Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | heartbeat.interval.ms = 3000 grafana | logger=migrator t=2024-01-23T11:59:41.102251041Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=6.502196ms policy-db-migrator | -------------- kafka | [2024-01-23 12:00:13,727] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-01-23T11:59:41.105300024Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | internal.leave.group.on.close = true grafana | logger=migrator t=2024-01-23T11:59:41.105346466Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=47.062µs policy-db-migrator | -------------- kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false grafana | logger=migrator t=2024-01-23T11:59:41.111741156Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" policy-db-migrator | kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | isolation.level = read_uncommitted grafana | logger=migrator t=2024-01-23T11:59:41.113067883Z level=info msg="Migration successfully executed" 
id="Add non-unique index alert_notification_state_alert_id" duration=1.326607ms policy-db-migrator | kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-01-23T11:59:41.11640513Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" policy-db-migrator | > upgrade 0660-toscaparameter.sql kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | max.partition.fetch.bytes = 1048576 grafana | logger=migrator t=2024-01-23T11:59:41.118039802Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.634171ms policy-db-migrator | -------------- kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | max.poll.interval.ms = 300000 grafana | logger=migrator t=2024-01-23T11:59:41.122313506Z level=info msg="Executing migration" id="Drop old annotation table v4" kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | max.poll.records = 500 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-01-23T11:59:41.122480194Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=167.379µs kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:41.128527557Z level=info msg="Executing migration" id="create annotation table v5" kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | metric.reporters = [] policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:41.129508876Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=981.039µs kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | metrics.num.samples = 2 policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:41.135662944Z level=info msg="Executing migration" id="add index annotation 0 v3" kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) policy-pap | metrics.recording.level = INFO policy-db-migrator | > upgrade 0670-toscapolicies.sql grafana | logger=migrator t=2024-01-23T11:59:41.136853814Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.197169ms kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:41.143344359Z level=info msg="Executing migration" id="add index annotation 1 v3" kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) grafana | logger=migrator t=2024-01-23T11:59:41.144421022Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.082254ms kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | receive.buffer.bytes = 65536 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:41.147848144Z level=info msg="Executing migration" id="add index annotation 2 v3" kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | reconnect.backoff.max.ms = 1000 policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:41.148855354Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.026271ms kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | reconnect.backoff.ms = 50 policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:41.152195822Z level=info msg="Executing migration" id="add index annotation 3 v3" kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | request.timeout.ms = 30000 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql grafana | logger=migrator t=2024-01-23T11:59:41.154319678Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=2.123336ms kafka | [2024-01-23 12:00:13,728] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | retry.backoff.ms = 100 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:41.160635134Z level=info msg="Executing migration" id="add index annotation 4 v3" kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 
epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-01-23T11:59:41.161994702Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.360018ms kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.jaas.config = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:41.165034654Z level=info msg="Executing migration" id="Update annotation table charset" kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:41.165062656Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=29.482µs policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:41.168086587Z level=info msg="Executing migration" id="Add column region_id to annotation table" policy-pap | sasl.kerberos.service.name = null kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0690-toscapolicy.sql grafana | logger=migrator t=2024-01-23T11:59:41.173064856Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.977739ms policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:41.176277017Z level=info msg="Executing migration" id="Drop category_id index" policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, 
version)) grafana | logger=migrator t=2024-01-23T11:59:41.176960752Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=684.164µs policy-pap | sasl.login.callback.handler.class = null kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:41.182108829Z level=info msg="Executing migration" id="Add column tags to annotation table" policy-pap | sasl.login.class = null kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:41.185438236Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.333477ms policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-01-23 12:00:13,729] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:41.189482358Z level=info msg="Executing migration" id="Create annotation_tag table v2" policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > upgrade 0700-toscapolicytype.sql grafana | logger=migrator t=2024-01-23T11:59:41.190190704Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=707.486µs policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:41.193084499Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) grafana | logger=migrator t=2024-01-23T11:59:41.194065048Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=980.089µs policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-01-23T11:59:41.199638157Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" policy-db-migrator | -------------- kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 
for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-01-23T11:59:41.200360723Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=721.926µs policy-db-migrator | kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-01-23T11:59:41.203653308Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" policy-db-migrator | kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-01-23T11:59:41.224416887Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=20.762919ms policy-db-migrator | > upgrade 0710-toscapolicytypes.sql kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-01-23T11:59:41.229013588Z level=info msg="Executing migration" id="Create annotation_tag table v3" policy-db-migrator | -------------- kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-01-23T11:59:41.230174026Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.152478ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) kafka | [2024-01-23 12:00:13,735] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-01-23T11:59:41.236710823Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" policy-db-migrator | -------------- kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-01-23T11:59:41.238437449Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.726606ms policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator 
t=2024-01-23T11:59:41.242227199Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.243223579Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=995.46µs policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | -------------- kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.246050291Z level=info msg="Executing migration" id="drop table annotation_tag_v2" policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.246735595Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=684.145µs policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.251982288Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.252198528Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=212.661µs policy-db-migrator | policy-pap | security.protocol = PLAINTEXT kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.2558248Z level=info msg="Executing migration" id="Add created time to annotation table" policy-db-migrator | policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.263444921Z level=info msg="Migration successfully executed" id="Add created time to annotation table" 
duration=7.619831ms policy-db-migrator | > upgrade 0730-toscaproperty.sql kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.266532936Z level=info msg="Executing migration" id="Add updated time to annotation table" policy-pap | session.timeout.ms = 45000 policy-db-migrator | -------------- kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.270765938Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.232692ms policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.273867473Z level=info msg="Executing migration" id="Add index for created in annotation table" policy-db-migrator | -------------- policy-pap | ssl.cipher.suites = null kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.274826351Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=962.098µs policy-db-migrator | policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.279619281Z level=info msg="Executing migration" id="Add index for updated in annotation table" policy-db-migrator | policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-01-23T11:59:41.280658223Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.036772ms policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.284076474Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" kafka | [2024-01-23 12:00:13,736] 
TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.engine.factory.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:41.284463194Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=386.87µs kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.key.password = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) grafana | logger=migrator t=2024-01-23T11:59:41.287894745Z level=info msg="Executing migration" id="Add epoch_end column" policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-01-23T11:59:41.292533998Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.640353ms kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:41.29797051Z level=info msg="Executing migration" id="Add index for epoch_end" kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.keystore.key = null policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:41.299016302Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.045632ms kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.keystore.location = null policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql grafana | logger=migrator t=2024-01-23T11:59:41.304100107Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.keystore.password = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:41.3043663Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=265.323µs kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.keystore.type = JKS policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 
grafana | logger=migrator t=2024-01-23T11:59:41.313887487Z level=info msg="Executing migration" id="Move region to single row" kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:41.314795352Z level=info msg="Migration successfully executed" id="Move region to single row" duration=908.135µs policy-pap | ssl.provider = null kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:41.320496118Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" policy-pap | ssl.secure.random.implementation = null kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:41.321717489Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.221431ms policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql grafana | logger=migrator t=2024-01-23T11:59:41.324837625Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.truststore.certificates = null kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-01-23T11:59:41.326209144Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.371049ms kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-01-23T11:59:41.329845446Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of 
replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-pap | ssl.truststore.password = null kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-01-23T11:59:41.330851356Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.00577ms policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:41.336150391Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" kafka | [2024-01-23 12:00:13,736] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | > upgrade 0770-toscarequirement.sql grafana | logger=migrator t=2024-01-23T11:59:41.338024085Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.874794ms policy-pap | policy-db-migrator | -------------- kafka | [2024-01-23 12:00:13,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.342243906Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) kafka | [2024-01-23 12:00:13,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-01-23T12:00:13.110+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 grafana | logger=migrator t=2024-01-23T11:59:41.343762183Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.525066ms kafka | [2024-01-23 12:00:13,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-01-23T12:00:13.110+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:41.354351683Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" policy-pap | [2024-01-23T12:00:13.110+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011213110 policy-db-migrator | kafka | [2024-01-23 12:00:13,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.356363313Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" 
duration=2.005991ms policy-db-migrator | policy-pap | [2024-01-23T12:00:13.111+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap kafka | [2024-01-23 12:00:13,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.359978984Z level=info msg="Executing migration" id="Increase tags column to length 4096" policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-pap | [2024-01-23T12:00:13.111+00:00|INFO|ServiceManager|main] Policy PAP starting topics kafka | [2024-01-23 12:00:13,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.36009971Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=120.166µs policy-db-migrator | -------------- policy-pap | [2024-01-23T12:00:13.111+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=a4c34505-3ec0-419b-8744-c011170ffba7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting kafka | [2024-01-23 12:00:13,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.36348937Z level=info msg="Executing migration" id="create test_data table" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) policy-pap | [2024-01-23T12:00:13.111+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=7faaa365-1216-4c85-9c2d-e9bca189fc3d, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting kafka | [2024-01-23 12:00:13,737] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:41.364473129Z level=info msg="Migration successfully executed" id="create test_data table" duration=983.609µs policy-db-migrator | -------------- policy-pap | [2024-01-23T12:00:13.111+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink 
kafka | [2024-01-23 12:00:13,892] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.369529583Z level=info msg="Executing migration" id="create dashboard_version table v1"
policy-db-migrator | 
policy-pap | [2024-01-23T12:00:13.129+00:00|INFO|ProducerConfig|main] ProducerConfig values:
kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.371235238Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.705326ms
policy-db-migrator | 
policy-pap | acks = -1
kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.378144014Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.37907089Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=920.516µs
policy-db-migrator | --------------
policy-pap | batch.size = 16384
kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.382446989Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.383476861Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.029492ms
policy-db-migrator | --------------
policy-pap | buffer.memory = 33554432
kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.390175786Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
policy-db-migrator | 
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.390488432Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=312.616µs
policy-db-migrator | 
policy-pap | client.id = producer-1
kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.397535375Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-pap | compression.type = none
kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.398210819Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=675.184µs
policy-db-migrator | --------------
policy-pap | connections.max.idle.ms = 540000
kafka | [2024-01-23 12:00:13,893] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.402808889Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
policy-pap | delivery.timeout.ms = 120000
kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:41.403170167Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=361.28µs
policy-pap | enable.idempotence = true
kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:41.407169587Z level=info msg="Executing migration" id="create team table"
policy-pap | interceptor.classes = []
kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:41.407931845Z level=info msg="Migration successfully executed" id="create team table" duration=761.88µs
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
grafana | logger=migrator t=2024-01-23T11:59:41.413727755Z level=info msg="Executing migration" id="add index team.org_id"
policy-pap | linger.ms = 0
kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.414783528Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.054793ms
policy-pap | max.block.ms = 60000
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.419323866Z level=info msg="Executing migration" id="add unique index team_org_id_name"
policy-pap | max.in.flight.requests.per.connection = 5
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.421520356Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=2.19534ms
policy-pap | max.request.size = 1048576
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.429281704Z level=info msg="Executing migration" id="Add column uid in team"
policy-pap | metadata.max.age.ms = 300000
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.434701175Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.420391ms
policy-pap | metadata.max.idle.ms = 300000
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.501582114Z level=info msg="Executing migration" id="Update uid column values in team"
policy-pap | metric.reporters = []
kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:41.502039937Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=473.464µs
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-pap | metrics.num.samples = 2
kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
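The controller entries above show each partition moving from NewPartition to OnlinePartition right after topic creation: the broker auto-creates the internal __consumer_offsets topic (50 partitions), while the application topic policy-pdp-pap gets a single partition (policy-pdp-pap-0) on this one-broker cluster. As an illustration only, the explicit-creation side of that flow with the Kafka AdminClient might look like this (the broker, not client code, creates __consumer_offsets):

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class TopicSetup {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // One partition, replication factor 1: matches policy-pdp-pap-0 with
                // leader=1, isr=[1], replicas=[1] in the state-change log above.
                admin.createTopics(List.of(new NewTopic("policy-pdp-pap", 1, (short) 1)))
                     .all().get();
            }
        }
    }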
grafana | logger=migrator t=2024-01-23T11:59:41.514250178Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-01-23T11:59:41.515379105Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.137637ms
kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-01-23T11:59:41.633699898Z level=info msg="Executing migration" id="create team member table"
kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | partitioner.adaptive.partitioning.enable = true
grafana | logger=migrator t=2024-01-23T11:59:41.634509999Z level=info msg="Migration successfully executed" id="create team member table" duration=813.761µs
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | partitioner.availability.timeout.ms = 0
grafana | logger=migrator t=2024-01-23T11:59:41.719594279Z level=info msg="Executing migration" id="add index team_member.org_id"
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | partitioner.class = null
grafana | logger=migrator t=2024-01-23T11:59:41.720338356Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=739.407µs
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | partitioner.ignore.keys = false
grafana | logger=migrator t=2024-01-23T11:59:41.818411626Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | receive.buffer.bytes = 32768
grafana | logger=migrator t=2024-01-23T11:59:41.820023137Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.613551ms
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-01-23T11:59:41.948042186Z level=info msg="Executing migration" id="add index team_member.team_id"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-01-23T11:59:41.949669127Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.629261ms
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-01-23T11:59:42.014745849Z level=info msg="Executing migration" id="Add column email to team table"
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | retries = 2147483647
grafana | logger=migrator t=2024-01-23T11:59:42.021126865Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=6.382666ms
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-23T11:59:42.069996122Z level=info msg="Executing migration" id="Add column external to team_member table"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-01-23T11:59:42.076307074Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=6.313222ms
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-01-23T11:59:42.081957104Z level=info msg="Executing migration" id="Add column permission to team_member table"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-01-23T11:59:42.08653612Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.589977ms
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-01-23T11:59:42.097061701Z level=info msg="Executing migration" id="create dashboard acl table"
policy-db-migrator | 
policy-pap | sasl.kerberos.service.name = null
kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.098579716Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.519065ms
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.103197495Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
policy-db-migrator | --------------
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.104706019Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.520405ms
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
policy-pap | sasl.login.callback.handler.class = null
kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.login.class = null
grafana | logger=migrator t=2024-01-23T11:59:42.10997838Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 
policy-pap | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-01-23T11:59:42.110968639Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=990.109µs
policy-db-migrator | 
policy-pap | sasl.login.read.timeout.ms = null
kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.114172008Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.115084143Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=911.846µs
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | [2024-01-23 12:00:13,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.119439428Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | [2024-01-23 12:00:13,898] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.12029327Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=859.582µs
policy-db-migrator | 
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | [2024-01-23 12:00:13,898] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.127697387Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
policy-db-migrator | 
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | [2024-01-23 12:00:13,898] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.12918068Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.483344ms
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-pap | sasl.mechanism = GSSAPI
kafka | [2024-01-23 12:00:13,900] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.136035419Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.137183226Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.147527ms
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.142677818Z level=info msg="Executing migration" id="add index dashboard_permission"
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.144215614Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.537536ms
policy-db-migrator | 
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.150436081Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
policy-db-migrator | 
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.151092654Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=652.592µs
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.156222608Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.156543124Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=318.575µs
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.160173923Z level=info msg="Executing migration" id="create tag table"
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.160906919Z level=info msg="Migration successfully executed" id="create tag table" duration=732.816µs
policy-db-migrator | 
policy-pap | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-01-23T11:59:42.165756759Z level=info msg="Executing migration" id="add index tag.key_value"
kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
policy-db-migrator | 
policy-pap | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-01-23T11:59:42.167247653Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.490864ms
kafka | [2024-01-23 12:00:13,901] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-pap | security.providers = null
grafana | logger=migrator t=2024-01-23T11:59:42.172198558Z level=info msg="Executing migration" id="create login attempt table"
kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
policy-db-migrator | --------------
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-01-23T11:59:42.173725134Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.532455ms
kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.181069187Z level=info msg="Executing migration" id="add index login_attempt.username"
policy-db-migrator | --------------
policy-pap | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-01-23T11:59:42.182273486Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.204309ms
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.18740795Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
policy-pap | ssl.cipher.suites = null
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.1892256Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.81731ms
kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.192803207Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
grafana | logger=migrator t=2024-01-23T11:59:42.212606097Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=19.80441ms
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.220790652Z level=info msg="Executing migration" id="create login_attempt v2"
policy-pap | ssl.engine.factory.class = null
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,902] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.221681716Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=891.104µs
policy-pap | ssl.key.password = null
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
policy-pap | ssl.keymanager.algorithm = SunX509
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.22722507Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
policy-pap | ssl.keystore.certificate.chain = null
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
grafana | logger=migrator t=2024-01-23T11:59:42.228994298Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.768768ms
policy-pap | ssl.keystore.key = null
grafana | logger=migrator t=2024-01-23T11:59:42.232585405Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.keystore.location = null
grafana | logger=migrator t=2024-01-23T11:59:42.233397706Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=839.511µs
kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
policy-pap | ssl.keystore.password = null
grafana | logger=migrator t=2024-01-23T11:59:42.23873517Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-01-23T11:59:42.239445925Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=710.556µs
kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
policy-db-migrator | 
policy-pap | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-01-23T11:59:42.242908246Z level=info msg="Executing migration" id="create user auth table"
kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
policy-db-migrator | 
policy-pap | ssl.provider = null
grafana | logger=migrator t=2024-01-23T11:59:42.244230931Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.322075ms
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
policy-pap | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-01-23T11:59:42.247910633Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.249790536Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.879343ms
kafka | [2024-01-23 12:00:13,903] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
policy-pap | ssl.truststore.certificates = null
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
grafana | logger=migrator t=2024-01-23T11:59:42.258275966Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
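The grafana migrator lines above show its table-rebuild choreography for login_attempt (rename to a temp table, create v2, copy v1 rows over, drop the temp table), with each step bracketed by an "Executing migration" / "Migration successfully executed ... duration=" pair. Grafana's real migrator is written in Go; purely as an illustration of the logged pattern, a timed step runner could look like this in Java, with no-op step bodies standing in for the actual DDL:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class TimedMigrator {
        private final Map<String, Runnable> steps = new LinkedHashMap<>(); // insertion order = execution order

        public void add(String id, Runnable step) { steps.put(id, step); }

        public void run() {
            steps.forEach((id, step) -> {
                System.out.printf("level=info msg=\"Executing migration\" id=\"%s\"%n", id);
                long t0 = System.nanoTime();
                step.run(); // would execute one DDL/DML statement in the real migrator
                System.out.printf("level=info msg=\"Migration successfully executed\" id=\"%s\" duration=%.6fms%n",
                        id, (System.nanoTime() - t0) / 1e6);
            });
        }

        public static void main(String[] args) {
            TimedMigrator m = new TimedMigrator();
            m.add("Rename table login_attempt to login_attempt_tmp_qwerty - v1", () -> {});
            m.add("create login_attempt v2", () -> {});
            m.add("copy login_attempt v1 to v2", () -> {});
            m.add("drop login_attempt_tmp_qwerty", () -> {});
            m.run();
        }
    }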
msg="Executing migration" id="alter user_auth.auth_id to length 190" kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) policy-pap | ssl.truststore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:42.258382682Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=103.745µs kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) policy-pap | ssl.truststore.password = null policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:42.261879055Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) policy-pap | ssl.truststore.type = JKS policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:42.27048467Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.615316ms kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) policy-pap | transaction.timeout.ms = 60000 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql grafana | logger=migrator t=2024-01-23T11:59:42.273599674Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) policy-pap | transactional.id = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:42.278722238Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.122314ms kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) grafana | logger=migrator t=2024-01-23T11:59:42.282020101Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) policy-pap | policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:42.287383996Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.363455ms kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) policy-pap | [2024-01-23T12:00:13.141+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:42.29272306Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) policy-pap | [2024-01-23T12:00:13.161+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-db-migrator | grafana | logger=migrator t=2024-01-23T11:59:42.29816501Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.441009ms kafka | [2024-01-23 12:00:13,904] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) policy-pap | [2024-01-23T12:00:13.161+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql grafana | logger=migrator t=2024-01-23T11:59:42.30323522Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
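The "Instantiated an idempotent producer" line above corresponds to the producer config dump in this log: enabling idempotence is what yields acks = -1 (all) and retries = 2147483647. A minimal Java sketch of an equivalent producer follows; the JSON payload is hypothetical, standing in for the PAP's real PDP messages:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PapPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            // Idempotence implies acks=all and effectively unbounded retries,
            // matching acks = -1 and retries = 2147483647 in the dump above.
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                // Hypothetical payload; real PAP messages are JSON PDP updates.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
            }
        }
    }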
kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
policy-pap | [2024-01-23T12:00:13.161+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011213161
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.304253531Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.020951ms
kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
policy-pap | [2024-01-23T12:00:13.161+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e3f1829a-7c06-43ff-a52c-f9eb795609b7, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
grafana | logger=migrator t=2024-01-23T11:59:42.313486788Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
policy-pap | [2024-01-23T12:00:13.161+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=de7a1a5b-b823-4e66-b4fb-feb25d317168, alive=false, publisher=null]]: starting
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.321126565Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=7.637958ms
kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
policy-pap | [2024-01-23T12:00:13.162+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.325547454Z level=info msg="Executing migration" id="create server_lock table"
kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
policy-pap | acks = -1
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.326301511Z level=info msg="Migration successfully executed" id="create server_lock table" duration=753.927µs
kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
policy-pap | auto.include.jmx.reporter = true
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
grafana | logger=migrator t=2024-01-23T11:59:42.329800785Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
policy-pap | batch.size = 16384
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.330790964Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=984.569µs
policy-pap | bootstrap.servers = [kafka:9092]
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.335927328Z level=info msg="Executing migration" id="create user auth token table"
policy-pap | buffer.memory = 33554432
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,905] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.337177329Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.250302ms
policy-pap | client.dns.lookup = use_all_dns_ips
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,907] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.344512022Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
policy-pap | client.id = producer-2
kafka | [2024-01-23 12:00:13,911] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.346780275Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=2.262072ms
policy-db-migrator | 
policy-pap | compression.type = none
kafka | [2024-01-23 12:00:13,925] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.418344055Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
policy-pap | connections.max.idle.ms = 540000
kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.420400206Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=2.056821ms
policy-db-migrator | --------------
policy-pap | delivery.timeout.ms = 120000
kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.424941061Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | enable.idempotence = true
kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | interceptor.classes = []
kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.426151961Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.21161ms
policy-db-migrator | 
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.430002811Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
policy-db-migrator | 
policy-pap | linger.ms = 0
kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.435936215Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.933544ms
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-pap | max.block.ms = 60000
kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.440507961Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
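The interleaved policy-pap lines are a standard Kafka ProducerConfig dump for producer-2. The values logged so far (acks = -1, enable.idempotence = true, batch.size = 16384, linger.ms = 0, bootstrap.servers = [kafka:9092], StringSerializer for keys) are what make the broker log "Instantiated an idempotent producer". A minimal sketch of constructing a producer with these same logged settings; it is illustrative only, not the policy-pap source:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public final class PapProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.CLIENT_ID_CONFIG, "producer-2");
            props.put(ProducerConfig.ACKS_CONFIG, "all");            // logged as acks = -1
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
            props.put(ProducerConfig.LINGER_MS_CONFIG, 0);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            // Closing releases buffers and network threads after any pending sends complete.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.flush();
            }
        }
    }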
id="add index user_auth_token.revoked_at" policy-db-migrator | -------------- policy-pap | max.in.flight.requests.per.connection = 5 kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:42.441760693Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.252442ms policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | max.request.size = 1048576 kafka | [2024-01-23 12:00:13,927] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:42.445377422Z level=info msg="Executing migration" id="create cache_data table" policy-db-migrator | -------------- policy-pap | metadata.max.age.ms = 300000 kafka | [2024-01-23 12:00:13,930] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:42.446416703Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.038101ms policy-db-migrator | policy-pap | metadata.max.idle.ms = 300000 kafka | [2024-01-23 12:00:13,930] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:42.45181451Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" policy-db-migrator | policy-pap | metric.reporters = [] kafka | [2024-01-23 12:00:13,931] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:42.454175987Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=2.362037ms policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-pap | metrics.num.samples = 2 kafka | [2024-01-23 12:00:13,931] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:42.511358336Z level=info msg="Executing migration" id="create short_url table v1" policy-db-migrator | -------------- policy-pap | metrics.recording.level = INFO kafka | [2024-01-23 12:00:13,931] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-01-23T11:59:42.512530244Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.176878ms policy-pap | metrics.sample.window.ms = 30000 kafka | [2024-01-23 12:00:13,931] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
kafka | [2024-01-23 12:00:13,931] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.517417296Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
policy-pap | partitioner.adaptive.partitioning.enable = true
kafka | [2024-01-23 12:00:13,931] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.518546392Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.129106ms
policy-pap | partitioner.availability.timeout.ms = 0
kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.522881716Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
policy-pap | partitioner.class = null
kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
grafana | logger=migrator t=2024-01-23T11:59:42.523234914Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=352.807µs
policy-pap | partitioner.ignore.keys = false
kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.528504274Z level=info msg="Executing migration" id="delete alert_definition table"
policy-pap | receive.buffer.bytes = 32768
kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-01-23T11:59:42.528834121Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=329.286µs
policy-pap | reconnect.backoff.max.ms = 1000
kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.532881201Z level=info msg="Executing migration" id="recreate alert_definition table"
policy-pap | reconnect.backoff.ms = 50
kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | 
policy-pap | request.timeout.ms = 30000
kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.534348813Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.466952ms
policy-pap | retries = 2147483647
kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
grafana | logger=migrator t=2024-01-23T11:59:42.538279478Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
policy-pap | retry.backoff.ms = 100
kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.540169531Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.889293ms
policy-pap | sasl.client.callback.handler.class = null
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.545719356Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
policy-pap | sasl.jaas.config = null
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-01-23 12:00:13,932] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.547688013Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.968447ms
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.552461609Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.552741063Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=283.194µs
policy-pap | sasl.kerberos.service.name = null
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.556815865Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.558598423Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.782538ms
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.562261484Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
policy-pap | sasl.login.callback.handler.class = null
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.563325967Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.064983ms
policy-pap | sasl.login.class = null
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.567607039Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
policy-pap | sasl.login.connect.timeout.ms = null
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.568718634Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.111225ms
policy-pap | sasl.login.read.timeout.ms = null
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.572123942Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
kafka | [2024-01-23 12:00:13,933] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
grafana | logger=migrator t=2024-01-23T11:59:42.573274659Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.150347ms
kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.577412284Z level=info msg="Executing migration" id="Add column paused in alert_definition"
kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-01-23T11:59:42.587511443Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=10.09536ms
kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.590720552Z level=info msg="Executing migration" id="drop alert_definition table"
kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.59149176Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=770.648µs
kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | sasl.login.retry.backoff.ms = 100
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.594569813Z level=info msg="Executing migration" id="delete alert_definition_version table"
kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | sasl.mechanism = GSSAPI
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
grafana | logger=migrator t=2024-01-23T11:59:42.594661627Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=91.415µs
kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.59916559Z level=info msg="Executing migration" id="recreate alert_definition_version table"
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.60059099Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.42476ms
policy-pap | sasl.oauthbearer.expected.audience = null
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-01-23T11:59:42.604308164Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
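The policy-db-migrator stream is walking through numbered upgrade scripts (0940 through 1060 and onward), each one a single statement printed between "--------------" separators, that add the indexes and RESTRICT foreign keys tying the TOSCA tables together. A minimal sketch of applying one such logged statement over JDBC; the URL, schema name (policyadmin) and credentials are hypothetical placeholders, not values from this build log:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public final class ApplyFkUpgrade {
        public static void main(String[] args) throws Exception {
            // DDL copied verbatim from upgrade script 0980-FK_ToscaNodeType_requirementsName.sql above.
            String ddl = "ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName "
                    + "FOREIGN KEY (requirementsName, requirementsVersion) "
                    + "REFERENCES toscarequirements (name, version) "
                    + "ON UPDATE RESTRICT ON DELETE RESTRICT";
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:mariadb://localhost:3306/policyadmin", "user", "password");
                 Statement stmt = conn.createStatement()) {
                stmt.execute(ddl); // RESTRICT blocks updates/deletes that would orphan child rows
            }
        }
    }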
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-01-23T11:59:42.606016959Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.708485ms
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-01-23T11:59:42.609402706Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-01-23T11:59:42.610514961Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.111685ms
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-23T11:59:42.614452136Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-01-23T11:59:42.614563862Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=113.545µs
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-01-23 12:00:13,934] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-01-23T11:59:42.617437824Z level=info msg="Executing migration" id="drop alert_definition_version table"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,935] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.618476915Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.038571ms
kafka | [2024-01-23 12:00:13,936] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.625132294Z level=info msg="Executing migration" id="create alert_instance table"
kafka | [2024-01-23 12:00:13,938] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.626683701Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.550717ms
kafka | [2024-01-23 12:00:13,938] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-01-23T11:59:42.631751292Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
kafka | [2024-01-23 12:00:13,938] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-pap | security.providers = null
grafana | logger=migrator t=2024-01-23T11:59:42.63313583Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.385298ms
kafka | [2024-01-23 12:00:13,938] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-01-23T11:59:42.636856084Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
kafka | [2024-01-23 12:00:13,938] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-01-23T11:59:42.637999941Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.143107ms
kafka | [2024-01-23 12:00:13,938] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-01-23T11:59:42.642741226Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
policy-pap | ssl.cipher.suites = null
grafana | logger=migrator t=2024-01-23T11:59:42.650035606Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=7.292691ms
kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0100-pdp.sql
grafana | logger=migrator t=2024-01-23T11:59:42.653363511Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.654128059Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=764.648µs
policy-pap | ssl.engine.factory.class = null
kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
grafana | logger=migrator t=2024-01-23T11:59:42.657398811Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
policy-pap | ssl.key.password = null
kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.658120246Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=718.965µs
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.665857189Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.706148472Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=40.287513ms
policy-pap | ssl.keystore.key = null
kafka | [2024-01-23 12:00:13,939] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
grafana | logger=migrator t=2024-01-23T11:59:42.71581513Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
policy-pap | ssl.keystore.location = null
kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.753332156Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=37.514586ms
policy-pap | ssl.keystore.password = null
kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
grafana | logger=migrator t=2024-01-23T11:59:42.759355004Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
policy-pap | ssl.keystore.type = JKS
kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.760059689Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=704.185µs
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.763021496Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
policy-pap | ssl.provider = null
kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.763942241Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=920.635µs
policy-pap | ssl.secure.random.implementation = null
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
grafana | logger=migrator t=2024-01-23T11:59:42.769074345Z level=info msg="Executing migration" id="add current_reason column related to current_state"
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.777707262Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.633997ms
policy-pap | ssl.truststore.certificates = null
kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
grafana | logger=migrator t=2024-01-23T11:59:42.780595855Z level=info msg="Executing migration" id="create alert_rule table"
policy-pap | ssl.truststore.location = null
kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.781204685Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=608.74µs
policy-pap | ssl.truststore.password = null
kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.78573985Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
policy-pap | ssl.truststore.type = JKS
kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.787383011Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.642462ms
policy-pap | transaction.timeout.ms = 60000
kafka | [2024-01-23 12:00:13,940] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
grafana | logger=migrator t=2024-01-23T11:59:42.822960911Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
policy-pap | transactional.id = null
kafka | [2024-01-23 12:00:13,941] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
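The ssl.* keys in the producer dump are the standard Kafka client TLS settings; in this run they all remain at null/JKS defaults because the CSIT environment talks plaintext (security.protocol = PLAINTEXT is logged earlier). A hedged sketch of what a TLS-enabled variant of the same client config would set; the store path and password are hypothetical, not values from this log:

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.common.config.SslConfigs;

    public final class TlsClientConfigSketch {
        public static Properties tlsProps() {
            Properties props = new Properties();
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
            props.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "JKS");  // logged default
            props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/opt/app/truststore.jks"); // hypothetical path
            props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");                // hypothetical password
            return props;
        }
    }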
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.824553Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.591418ms
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-01-23 12:00:13,941] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
grafana | logger=migrator t=2024-01-23T11:59:42.829600679Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
policy-pap | 
kafka | [2024-01-23 12:00:13,941] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.830653481Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.052482ms
policy-pap | [2024-01-23T12:00:13.163+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
kafka | [2024-01-23 12:00:13,941] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.834012478Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
policy-pap | [2024-01-23T12:00:13.166+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
kafka | [2024-01-23 12:00:13,941] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.834080271Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=67.414µs
policy-pap | [2024-01-23T12:00:13.166+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
kafka | [2024-01-23 12:00:13,942] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
grafana | logger=migrator t=2024-01-23T11:59:42.839363442Z level=info msg="Executing migration" id="add column for to alert_rule"
policy-pap | [2024-01-23T12:00:13.166+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706011213166
kafka | [2024-01-23 12:00:13,942] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.845159739Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=5.797677ms
kafka | [2024-01-23 12:00:13,942] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
policy-pap | [2024-01-23T12:00:13.166+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=de7a1a5b-b823-4e66-b4fb-feb25d317168, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
grafana | logger=migrator t=2024-01-23T11:59:42.848392099Z level=info msg="Executing migration" id="add column annotations to alert_rule"
kafka | [2024-01-23 12:00:13,942] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-23T12:00:13.166+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.854222647Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.828718ms
kafka | [2024-01-23 12:00:13,942] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-23T12:00:13.166+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.859400644Z level=info msg="Executing migration" id="add column labels to alert_rule"
kafka | [2024-01-23 12:00:13,943] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-23T12:00:13.169+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:42.865170659Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.769576ms
policy-pap | [2024-01-23T12:00:13.170+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
kafka | [2024-01-23 12:00:13,945] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.870535794Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,945] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-23T12:00:13.186+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
policy-db-migrator | 
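The 0120-0140 migrator scripts above rebuild the pdpstatistics primary key: drop the old key, add an ID column, backfill it with ROW_NUMBER() ordered by timeStamp, then create the composite key PK_PDPSTATISTICS (ID, name, version). A minimal sketch of running the two logged 0140-pk_pdpstatistics.sql statements in order over JDBC; the URL, schema name and credentials are hypothetical placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public final class RebuildPdpStatisticsPk {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:mariadb://localhost:3306/policyadmin", "user", "password");
                 Statement stmt = conn.createStatement()) {
                // Backfill ID so every (ID, name, version) triple is unique (SQL verbatim from the log).
                stmt.executeUpdate("UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, "
                        + "ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics "
                        + "GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND "
                        + "p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num");
                // Only after the backfill can the composite primary key be added.
                stmt.execute("ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS "
                        + "PRIMARY KEY (ID, name, version)");
            }
        }
    }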
[2024-01-23 12:00:13,945] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:42.871421528Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=885.694µs policy-pap | [2024-01-23T12:00:13.187+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-db-migrator | kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:42.873910901Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" policy-pap | [2024-01-23T12:00:13.189+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-db-migrator | > upgrade 0150-pdpstatistics.sql kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:42.875430537Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.535236ms policy-pap | [2024-01-23T12:00:13.190+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-db-migrator | -------------- kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:42.904789609Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" policy-pap | [2024-01-23T12:00:13.190+00:00|INFO|TimerManager|Thread-10] timer manager state-change started policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:42.914081529Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.296239ms policy-pap | [2024-01-23T12:00:13.190+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 
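For readability: the two pdpstatistics migration steps that policy-db-migrator echoes in the stretch above reassemble into the following sketch. This is only what appears in this excerpt of the console output; the full 0140/0150 scripts may carry additional statements that were not echoed here, and the dialect is the MySQL/MariaDB variant the migrator targets.

    -- 0140-pk_pdpstatistics.sql: backfill a synthetic id, then add the composite primary key.
    -- ROW_NUMBER() numbers the distinct (name, version, timeStamp) rows in timestamp order,
    -- so every existing row has a non-null id before the key constraint is created.
    UPDATE pdpstatistics as p
    JOIN (SELECT name, version, timeStamp,
                 ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num
            FROM pdpstatistics
           GROUP BY name, version, timeStamp) AS t
      ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp)
    SET p.id=t.row_num;
    ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version);

    -- 0150-pdpstatistics.sql: timeStamp is no longer part of the key, so it can be
    -- relaxed to a nullable datetime with microsecond precision.
    ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL;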
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.922584989Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
policy-pap | [2024-01-23T12:00:13.192+00:00|INFO|ServiceManager|main] Policy PAP started
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.932069728Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=9.486279ms
policy-pap | [2024-01-23T12:00:13.195+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.552 seconds (process running for 11.168)
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.935598693Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
policy-pap | [2024-01-23T12:00:13.611+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.936602943Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.000639ms
policy-pap | [2024-01-23T12:00:13.611+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: sXWmytVdQyKDGijCKdambA
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.940217391Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
policy-pap | [2024-01-23T12:00:13.611+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: sXWmytVdQyKDGijCKdambA
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.947368185Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.150294ms
policy-pap | [2024-01-23T12:00:13.611+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Cluster ID: sXWmytVdQyKDGijCKdambA
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,946] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.951981023Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
policy-pap | [2024-01-23T12:00:13.648+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,995] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.957939578Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.957875ms
policy-pap | [2024-01-23T12:00:13.649+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,995] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.963390228Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
policy-pap | [2024-01-23T12:00:13.709+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
kafka | [2024-01-23 12:00:13,995] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.963460281Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=69.653µs
policy-pap | [2024-01-23T12:00:13.710+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: sXWmytVdQyKDGijCKdambA
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,995] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.967571135Z level=info msg="Executing migration" id="create alert_rule_version table"
policy-pap | [2024-01-23T12:00:13.720+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
kafka | [2024-01-23 12:00:13,995] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.969584084Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=2.012489ms
policy-pap | [2024-01-23T12:00:13.824+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | JOIN pdpstatistics b
kafka | [2024-01-23 12:00:13,996] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
policy-pap | [2024-01-23T12:00:13.861+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-23T11:59:42.974434974Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
kafka | [2024-01-23 12:00:13,996] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.975458595Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.023361ms
policy-pap | [2024-01-23T12:00:13.941+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | SET a.id = b.id
kafka | [2024-01-23 12:00:13,996] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.978869794Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
policy-pap | [2024-01-23T12:00:13.975+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,996] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.979949747Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.079473ms
policy-pap | [2024-01-23T12:00:14.048+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:42.98405878Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
policy-db-migrator | 
kafka | [2024-01-23 12:00:13,996] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
policy-pap | [2024-01-23T12:00:14.086+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
kafka | [2024-01-23 12:00:13,996] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
policy-pap | [2024-01-23T12:00:14.154+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-23T11:59:42.984122883Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=64.503µs
policy-db-migrator | --------------
policy-pap | [2024-01-23T12:00:14.192+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
kafka | [2024-01-23 12:00:13,996] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.988434167Z level=info msg="Executing migration" id="add column for to alert_rule_version"
policy-pap | [2024-01-23T12:00:14.260+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-01-23 12:00:13,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:42.994641254Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.206537ms
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:13,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
policy-db-migrator | 
policy-pap | [2024-01-23T12:00:14.299+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-23T11:59:42.999894984Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
kafka | [2024-01-23 12:00:13,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2024-01-23 12:00:13,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
policy-pap | [2024-01-23T12:00:14.371+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-23T11:59:43.004781435Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.886961ms
kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
policy-pap | [2024-01-23T12:00:14.404+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:43.007929141Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
policy-pap | [2024-01-23T12:00:14.476+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
grafana | logger=migrator t=2024-01-23T11:59:43.012318658Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.387697ms
kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
policy-pap | [2024-01-23T12:00:14.512+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.016755458Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
grafana | logger=migrator t=2024-01-23T11:59:43.022886321Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.130323ms
kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
policy-pap | [2024-01-23T12:00:14.586+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.026437827Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
policy-pap | [2024-01-23T12:00:14.619+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:43.033051634Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.612997ms
kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
policy-pap | [2024-01-23T12:00:14.704+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:43.038580707Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
policy-pap | [2024-01-23T12:00:14.712+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] (Re-)joining group
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
grafana | logger=migrator t=2024-01-23T11:59:43.038723495Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=140.767µs
kafka | [2024-01-23 12:00:13,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
policy-pap | [2024-01-23T12:00:14.726+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.042957284Z level=info msg="Executing migration" id=create_alert_configuration_table
kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
policy-pap | [2024-01-23T12:00:14.728+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
grafana | logger=migrator t=2024-01-23T11:59:43.043725192Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=767.778µs
kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
policy-pap | [2024-01-23T12:00:14.756+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-05125a59-907a-47c3-93e1-a990571b604b
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.047276938Z level=info msg="Executing migration" id="Add column default in alert_configuration"
kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:43.053938997Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.661119ms
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:43.05824888Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
policy-pap | [2024-01-23T12:00:14.756+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
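The jpapolicyaudit statements echoed in the 0190/0200 steps above reassemble into the sketch below; the column list is exactly what the migrator printed, reflowed for readability:

    -- 0190-jpapolicyaudit.sql: audit trail keyed by (ID, name, version) of the policy acted on.
    CREATE TABLE IF NOT EXISTS jpapolicyaudit (
        ACTION INT DEFAULT NULL,
        PDPGROUP VARCHAR(255) NULL,
        PDPTYPE VARCHAR(255) NULL,
        TIMESTAMP datetime DEFAULT NULL,
        USER VARCHAR(255) NULL,
        ID BIGINT NOT NULL,
        name VARCHAR(120) NOT NULL,
        version VARCHAR(20) NOT NULL,
        PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version));

    -- 0200-JpaPolicyAuditIndex_timestamp.sql: index TIMESTAMP so time-ranged
    -- audit queries do not need a full table scan.
    CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP);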
kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
policy-db-migrator | > upgrade 0210-sequence.sql
grafana | logger=migrator t=2024-01-23T11:59:43.058320914Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=73.034µs
policy-pap | [2024-01-23T12:00:14.756+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.061776635Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
policy-pap | [2024-01-23T12:00:14.758+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Request joining group due to: need to re-join with the given member-id: consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3-c639ff7a-2705-4b68-b804-62b68552537a
kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
grafana | logger=migrator t=2024-01-23T11:59:43.07258955Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=10.811835ms
policy-pap | [2024-01-23T12:00:14.758+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.080453519Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
policy-pap | [2024-01-23T12:00:14.758+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] (Re-)joining group
kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:43.081182805Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=728.906µs
policy-pap | [2024-01-23T12:00:17.785+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-05125a59-907a-47c3-93e1-a990571b604b', protocol='range'}
kafka | [2024-01-23 12:00:13,999] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:43.084487918Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
policy-pap | [2024-01-23T12:00:17.787+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Successfully joined group with generation Generation{generationId=1, memberId='consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3-c639ff7a-2705-4b68-b804-62b68552537a', protocol='range'}
kafka | [2024-01-23 12:00:14,000] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
policy-db-migrator | > upgrade 0220-sequence.sql
grafana | logger=migrator t=2024-01-23T11:59:43.090960349Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.4712ms
policy-pap | [2024-01-23T12:00:17.799+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Finished assignment for group at generation 1: {consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3-c639ff7a-2705-4b68-b804-62b68552537a=Assignment(partitions=[policy-pdp-pap-0])}
kafka | [2024-01-23 12:00:14,000] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.094853151Z level=info msg="Executing migration" id=create_ngalert_configuration_table
policy-pap | [2024-01-23T12:00:17.799+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-05125a59-907a-47c3-93e1-a990571b604b=Assignment(partitions=[policy-pdp-pap-0])}
kafka | [2024-01-23 12:00:14,000] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
grafana | logger=migrator t=2024-01-23T11:59:43.095615539Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=762.798µs
kafka | [2024-01-23 12:00:14,000] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
policy-pap | [2024-01-23T12:00:17.841+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Successfully synced group in generation Generation{generationId=1, memberId='consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3-c639ff7a-2705-4b68-b804-62b68552537a', protocol='range'}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.100593845Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
kafka | [2024-01-23 12:00:14,000] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
policy-pap | [2024-01-23T12:00:17.842+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-05125a59-907a-47c3-93e1-a990571b604b', protocol='range'}
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:43.102154292Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.558497ms
kafka | [2024-01-23 12:00:14,000] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
policy-pap | [2024-01-23T12:00:17.842+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator | 
grafana | logger=migrator t=2024-01-23T11:59:43.106053335Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
kafka | [2024-01-23 12:00:14,000] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
policy-pap | [2024-01-23T12:00:17.843+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
grafana | logger=migrator t=2024-01-23T11:59:43.112735836Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.683781ms
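The sequence table created and seeded in the 0210/0220 steps above has the classic layout of a JPA table-based id generator (SEQ_NAME/SEQ_COUNT); reassembled, the echoed statements read as follows, with the seeding rationale as a comment. The "table generator" interpretation is an inference from the schema shape, not something the log states:

    -- 0210-sequence.sql: backing table for a table-based id generator.
    CREATE TABLE IF NOT EXISTS sequence (
        SEQ_NAME VARCHAR(50) NOT NULL,
        SEQ_COUNT DECIMAL(38) DEFAULT NULL,
        PRIMARY KEY PK_SEQUENCE (SEQ_NAME));

    -- 0220-sequence.sql: seed SEQ_GEN at the largest id already handed out by the
    -- 0140/0170 renumbering, so newly generated ids cannot collide with existing rows.
    INSERT INTO sequence(SEQ_NAME, SEQ_COUNT)
    VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics));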
msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.683781ms kafka | [2024-01-23 12:00:14,001] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) policy-pap | [2024-01-23T12:00:17.849+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Adding newly assigned partitions: policy-pdp-pap-0 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-23T11:59:43.118723382Z level=info msg="Executing migration" id="create provenance_type table" policy-pap | [2024-01-23T12:00:17.849+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) kafka | [2024-01-23 12:00:14,002] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:43.119421657Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=698.675µs policy-db-migrator | -------------- kafka | [2024-01-23 12:00:14,002] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:43.123813874Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" policy-pap | [2024-01-23T12:00:17.872+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-db-migrator | kafka | [2024-01-23 12:00:14,002] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:43.12616032Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=2.343356ms policy-pap | [2024-01-23T12:00:17.874+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Found no committed offset for partition policy-pdp-pap-0 policy-db-migrator | kafka | [2024-01-23 12:00:14,002] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:43.131948936Z level=info msg="Executing migration" id="create alert_image table" policy-pap | [2024-01-23T12:00:17.898+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position 
FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql kafka | [2024-01-23 12:00:14,003] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:43.133196628Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.248972ms policy-pap | [2024-01-23T12:00:17.898+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3, groupId=7faaa365-1216-4c85-9c2d-e9bca189fc3d] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-db-migrator | -------------- kafka | [2024-01-23 12:00:14,003] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:43.136418887Z level=info msg="Executing migration" id="add unique index on token to alert_image table" policy-pap | [2024-01-23T12:00:22.059+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) kafka | [2024-01-23 12:00:14,004] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:43.137512692Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.091524ms policy-pap | [2024-01-23T12:00:22.059+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' policy-db-migrator | -------------- kafka | [2024-01-23 12:00:14,004] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:43.142291938Z level=info msg="Executing migration" id="support longer URLs in alert_image table" policy-pap | [2024-01-23T12:00:22.062+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 3 ms policy-db-migrator | kafka | [2024-01-23 12:00:14,005] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:43.142359861Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=67.273µs policy-pap | [2024-01-23T12:00:34.791+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: policy-db-migrator | kafka | [2024-01-23 12:00:14,006] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions 
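The two TOSCA join-table migrations echoed in the 0100/0110 steps above reassemble into the sketch below; both give a previously key-less collection table an explicit composite primary key tied to the owning entity's (name, version):

    -- 0100-jpatoscapolicy_targets.sql
    ALTER TABLE jpatoscapolicy_targets
        ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL,
        ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL,
        ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion);

    -- 0110-jpatoscapolicytype_targets.sql: the same treatment for policy types.
    ALTER TABLE jpatoscapolicytype_targets
        ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL,
        ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL,
        ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion);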
HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) grafana | logger=migrator t=2024-01-23T11:59:43.145473405Z level=info msg="Executing migration" id=create_alert_configuration_history_table policy-pap | [] policy-db-migrator | > upgrade 0120-toscatrigger.sql kafka | [2024-01-23 12:00:14,006] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:43.146620672Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.147867ms policy-pap | [2024-01-23T12:00:34.792+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- kafka | [2024-01-23 12:00:14,061] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"467b6bf8-582b-4dbd-92b4-9e245489db39","timestampMs":1706011234753,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"} policy-pap | [2024-01-23T12:00:34.792+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-01-23T11:59:43.149691044Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" kafka | [2024-01-23 12:00:14,072] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | DROP TABLE IF EXISTS toscatrigger policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"467b6bf8-582b-4dbd-92b4-9e245489db39","timestampMs":1706011234753,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"} grafana | logger=migrator t=2024-01-23T11:59:43.150667882Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=976.748µs kafka | [2024-01-23 12:00:14,073] INFO [Partition __consumer_offsets-3 broker=1] 
No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-23T12:00:34.800+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus grafana | logger=migrator t=2024-01-23T11:59:43.157671599Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" kafka | [2024-01-23 12:00:14,074] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-23T12:00:34.878+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting grafana | logger=migrator t=2024-01-23T11:59:43.15810923Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" kafka | [2024-01-23 12:00:14,076] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | policy-pap | [2024-01-23T12:00:34.878+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting listener grafana | logger=migrator t=2024-01-23T11:59:43.162913088Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" kafka | [2024-01-23 12:00:14,088] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-pap | [2024-01-23T12:00:34.879+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting timer grafana | logger=migrator t=2024-01-23T11:59:43.163402212Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=488.734µs kafka | [2024-01-23 12:00:14,090] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- policy-pap | [2024-01-23T12:00:34.879+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c, expireMs=1706011264879] grafana | logger=migrator t=2024-01-23T11:59:43.166775659Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" kafka | [2024-01-23 12:00:14,090] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB policy-pap | [2024-01-23T12:00:34.881+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c, expireMs=1706011264879] grafana | logger=migrator t=2024-01-23T11:59:43.168061153Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" 
duration=1.284364ms kafka | [2024-01-23 12:00:14,090] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-23T12:00:34.881+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting enqueue grafana | logger=migrator t=2024-01-23T11:59:43.173003447Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" kafka | [2024-01-23 12:00:14,090] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | policy-pap | [2024-01-23T12:00:34.882+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate started grafana | logger=migrator t=2024-01-23T11:59:43.181401403Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.398596ms kafka | [2024-01-23 12:00:14,099] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | [2024-01-23T12:00:34.884+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-01-23T11:59:43.185371539Z level=info msg="Executing migration" id="create library_element table v1" kafka | [2024-01-23 12:00:14,100] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-01-23T11:59:43.186337167Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=965.208µs kafka | [2024-01-23 12:00:14,100] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-23T12:00:34.932+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-01-23T11:59:43.19024543Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" kafka | [2024-01-23 12:00:14,100] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | DROP TABLE IF EXISTS toscaparameter policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | 
logger=migrator t=2024-01-23T11:59:43.192416008Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=2.169577ms kafka | [2024-01-23 12:00:14,100] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-23T12:00:34.934+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-01-23T11:59:43.197895319Z level=info msg="Executing migration" id="create library_element_connection table v1" kafka | [2024-01-23 12:00:14,108] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-01-23T11:59:43.198720219Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=824.55µs kafka | [2024-01-23 12:00:14,108] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | policy-pap | [2024-01-23T12:00:34.935+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE grafana | logger=migrator t=2024-01-23T11:59:43.203923077Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" kafka | [2024-01-23 12:00:14,108] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-pap | [2024-01-23T12:00:34.935+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE grafana | logger=migrator t=2024-01-23T11:59:43.205842532Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.920205ms kafka | [2024-01-23 12:00:14,108] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-23T12:00:34.957+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-01-23T11:59:43.247566156Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" kafka | [2024-01-23 12:00:14,109] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d2388848-0012-45a5-abaf-541938745a99","timestampMs":1706011234943,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"}
grafana | logger=migrator t=2024-01-23T11:59:43.249364925Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.795919ms
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,117] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-23T12:00:34.960+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-01-23T11:59:43.255718129Z level=info msg="Executing migration" id="increase max description length to 2048"
policy-db-migrator |
kafka | [2024-01-23 12:00:14,117] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d2388848-0012-45a5-abaf-541938745a99","timestampMs":1706011234943,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup"}
grafana | logger=migrator t=2024-01-23T11:59:43.25574502Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=27.821µs
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,117] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:34.961+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
grafana | logger=migrator t=2024-01-23T11:59:43.259231863Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
kafka | [2024-01-23 12:00:14,118] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:34.961+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-01-23T11:59:43.259297456Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=66.343µs
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,118] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:43.262728566Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f2f3d3ad-4c80-4136-9424-630cff59eb41","timestampMs":1706011234944,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
kafka | [2024-01-23 12:00:14,127] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-23T11:59:43.263071413Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=342.677µs
policy-pap | [2024-01-23T12:00:34.982+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,128] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:43.265539965Z level=info msg="Executing migration" id="create data_keys table"
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f2f3d3ad-4c80-4136-9424-630cff59eb41","timestampMs":1706011234944,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-23T11:59:43.26645839Z level=info msg="Migration successfully executed" id="create data_keys table" duration=918.205µs
policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
kafka | [2024-01-23 12:00:14,128] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:34.982+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping
grafana | logger=migrator t=2024-01-23T11:59:43.270825276Z level=info msg="Executing migration" id="create secrets table"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,129] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:34.982+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c
grafana | logger=migrator t=2024-01-23T11:59:43.271625966Z level=info msg="Migration successfully executed" id="create secrets table" duration=800.08µs
policy-db-migrator |
kafka | [2024-01-23 12:00:14,129] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-23T12:00:34.982+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping enqueue
grafana | logger=migrator t=2024-01-23T11:59:43.277529128Z level=info msg="Executing migration" id="rename data_keys name column to id"
policy-db-migrator |
kafka | [2024-01-23 12:00:14,135] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-23T12:00:34.983+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping timer
grafana | logger=migrator t=2024-01-23T11:59:43.32648619Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=48.956212ms
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
kafka | [2024-01-23 12:00:14,136] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-23T12:00:34.983+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c, expireMs=1706011264879]
grafana | logger=migrator t=2024-01-23T11:59:43.335471544Z level=info msg="Executing migration" id="add name column into data_keys"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,136] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:34.983+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping listener
grafana | logger=migrator t=2024-01-23T11:59:43.342797907Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.328903ms
policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
kafka | [2024-01-23 12:00:14,136] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:34.983+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopped
grafana | logger=migrator t=2024-01-23T11:59:43.348195984Z level=info msg="Executing migration" id="copy data_keys id column values into name"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,136] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate successful
grafana | logger=migrator t=2024-01-23T11:59:43.348341271Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=145.287µs
policy-db-migrator |
kafka | [2024-01-23 12:00:14,150] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 start publishing next request
grafana | logger=migrator t=2024-01-23T11:59:43.351781671Z level=info msg="Executing migration" id="rename data_keys name column to label"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,152] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange starting
grafana | logger=migrator t=2024-01-23T11:59:43.397886832Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=46.105201ms
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
kafka | [2024-01-23 12:00:14,152] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange starting listener
grafana | logger=migrator t=2024-01-23T11:59:43.401209296Z level=info msg="Executing migration" id="rename data_keys id column back to name"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,152] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange starting timer
grafana | logger=migrator t=2024-01-23T11:59:43.44655331Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=45.343194ms
policy-db-migrator |
kafka | [2024-01-23 12:00:14,152] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=d5747120-881d-4c2e-9c54-68eb2a8c3ec9, expireMs=1706011264991]
grafana | logger=migrator t=2024-01-23T11:59:43.451942496Z level=info msg="Executing migration" id="create kv_store table v1"
policy-db-migrator |
kafka | [2024-01-23 12:00:14,162] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange starting enqueue
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
kafka | [2024-01-23 12:00:14,163] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange started
grafana | logger=migrator t=2024-01-23T11:59:43.453184918Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.242812ms
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,163] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:34.991+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=d5747120-881d-4c2e-9c54-68eb2a8c3ec9, expireMs=1706011264991]
grafana | logger=migrator t=2024-01-23T11:59:43.458822747Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
kafka | [2024-01-23 12:00:14,163] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:34.992+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-01-23T11:59:43.460876098Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=2.058492ms
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,163] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-23T11:59:43.464315378Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
policy-db-migrator |
kafka | [2024-01-23 12:00:14,170] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-23 12:00:14,171] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-23T12:00:35.003+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.464779011Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=463.033µs
kafka | [2024-01-23 12:00:14,171] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
grafana | logger=migrator t=2024-01-23T11:59:43.469381949Z level=info msg="Executing migration" id="create permission table"
kafka | [2024-01-23 12:00:14,171] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:35.003+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.470277513Z level=info msg="Migration successfully executed" id="create permission table" duration=894.894µs
kafka | [2024-01-23 12:00:14,172] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-23T12:00:35.019+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:43.477874989Z level=info msg="Executing migration" id="add unique index permission.role_id"
kafka | [2024-01-23 12:00:14,178] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"9ca47fc4-4a5a-4269-ac4b-8ea5170943ca","timestampMs":1706011235008,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:43.479613995Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.736016ms
kafka | [2024-01-23 12:00:14,181] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-23T12:00:35.020+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d5747120-881d-4c2e-9c54-68eb2a8c3ec9
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
kafka | [2024-01-23 12:00:14,181] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:35.037+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-01-23T11:59:43.483128449Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,181] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","timestampMs":1706011234863,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-23T11:59:43.484990621Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.862172ms
policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
kafka | [2024-01-23 12:00:14,181] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-23T12:00:35.037+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
grafana | logger=migrator t=2024-01-23T11:59:43.492541945Z level=info msg="Executing migration" id="create role table"
policy-db-migrator | --------------
policy-pap | [2024-01-23T12:00:35.041+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-01-23 12:00:14,195] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-23T11:59:43.493392177Z level=info msg="Migration successfully executed" id="create role table" duration=849.322µs
policy-db-migrator |
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d5747120-881d-4c2e-9c54-68eb2a8c3ec9","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"9ca47fc4-4a5a-4269-ac4b-8ea5170943ca","timestampMs":1706011235008,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-01-23 12:00:14,196] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:43.498694429Z level=info msg="Executing migration" id="add column display_name"
policy-db-migrator |
policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange stopping
kafka | [2024-01-23 12:00:14,196] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:43.510191938Z level=info msg="Migration successfully executed" id="add column display_name" duration=11.497509ms
policy-db-migrator | > upgrade 0100-upgrade.sql
policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange stopping enqueue
kafka | [2024-01-23 12:00:14,196] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:43.513183596Z level=info msg="Executing migration" id="add column group_name"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,196] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | select 'upgrade to 1100 completed' as msg
policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange stopping timer
grafana | logger=migrator t=2024-01-23T11:59:43.520189212Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.004846ms
kafka | [2024-01-23 12:00:14,204] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=d5747120-881d-4c2e-9c54-68eb2a8c3ec9, expireMs=1706011264991]
grafana | logger=migrator t=2024-01-23T11:59:43.523483145Z level=info msg="Executing migration" id="add index role.org_id"
kafka | [2024-01-23 12:00:14,204] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange stopping listener
grafana | logger=migrator t=2024-01-23T11:59:43.524957298Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.474153ms
policy-db-migrator |
kafka | [2024-01-23 12:00:14,204] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange stopped
grafana | logger=migrator t=2024-01-23T11:59:43.531080451Z level=info msg="Executing migration" id="add unique index role_org_id_name"
policy-db-migrator | msg
kafka | [2024-01-23 12:00:14,204] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpStateChange successful
grafana | logger=migrator t=2024-01-23T11:59:43.533104531Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=2.02408ms
policy-db-migrator | upgrade to 1100 completed
kafka | [2024-01-23 12:00:14,204] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 start publishing next request
grafana | logger=migrator t=2024-01-23T11:59:43.537594373Z level=info msg="Executing migration" id="add index role_org_id_uid"
policy-db-migrator |
kafka | [2024-01-23 12:00:14,212] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-23T12:00:35.042+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting
grafana | logger=migrator t=2024-01-23T11:59:43.538804813Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.21046ms
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
kafka | [2024-01-23 12:00:14,212] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-23T12:00:35.043+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting listener
grafana | logger=migrator t=2024-01-23T11:59:43.541756569Z level=info msg="Executing migration" id="create team role table"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,213] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:43.542591271Z level=info msg="Migration successfully executed" id="create team role table" duration=834.781µs
policy-pap | [2024-01-23T12:00:35.043+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting timer
policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
kafka | [2024-01-23 12:00:14,213] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:43.548354746Z level=info msg="Executing migration" id="add index team_role.org_id"
policy-pap | [2024-01-23T12:00:35.043+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=63a76968-fae6-4e69-9528-57bfc1bb20a8, expireMs=1706011265043]
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,213] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:43.549495102Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.140276ms
policy-pap | [2024-01-23T12:00:35.043+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate starting enqueue
policy-db-migrator |
kafka | [2024-01-23 12:00:14,219] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-23T11:59:43.555008105Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
policy-pap | [2024-01-23T12:00:35.043+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate started
policy-db-migrator |
kafka | [2024-01-23 12:00:14,219] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:43.55714195Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.132885ms
policy-pap | [2024-01-23T12:00:35.043+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
kafka | [2024-01-23 12:00:14,219] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:43.560945939Z level=info msg="Executing migration" id="add index team_role.team_id"
policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63a76968-fae6-4e69-9528-57bfc1bb20a8","timestampMs":1706011235028,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.562547998Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.60413ms
policy-pap | [2024-01-23T12:00:35.054+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-01-23 12:00:14,219] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
grafana | logger=migrator t=2024-01-23T11:59:43.567925754Z level=info msg="Executing migration" id="create user role table"
policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63a76968-fae6-4e69-9528-57bfc1bb20a8","timestampMs":1706011235028,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-01-23 12:00:14,219] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.568925653Z level=info msg="Migration successfully executed" id="create user role table" duration=999.829µs
kafka | [2024-01-23 12:00:14,227] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-23T12:00:35.054+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
grafana | logger=migrator t=2024-01-23T11:59:43.57673484Z level=info msg="Executing migration" id="add index user_role.org_id"
policy-db-migrator |
kafka | [2024-01-23 12:00:14,227] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-23T12:00:35.055+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-01-23T11:59:43.578083946Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.348666ms
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,227] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
policy-pap | {"source":"pap-c9cd1c7c-2e58-4937-84b6-2c31f25c757e","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63a76968-fae6-4e69-9528-57bfc1bb20a8","timestampMs":1706011235028,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-23T11:59:43.582501965Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
kafka | [2024-01-23 12:00:14,227] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:35.055+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
grafana | logger=migrator t=2024-01-23T11:59:43.583884743Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.379978ms
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,228] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-23T12:00:35.065+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-01-23T11:59:43.589903731Z level=info msg="Executing migration" id="add index user_role.user_id"
policy-db-migrator |
kafka | [2024-01-23 12:00:14,267] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63a76968-fae6-4e69-9528-57bfc1bb20a8","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d7b3019b-93eb-43bb-bab7-dfe71e0e46ae","timestampMs":1706011235053,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-23T11:59:43.592591034Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=3.01663ms
policy-db-migrator |
kafka | [2024-01-23 12:00:14,267] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-23T12:00:35.066+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 63a76968-fae6-4e69-9528-57bfc1bb20a8
grafana | logger=migrator t=2024-01-23T11:59:43.597926568Z level=info msg="Executing migration" id="create builtin role table"
policy-db-migrator | > upgrade 0120-audit_sequence.sql
kafka | [2024-01-23 12:00:14,267] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:35.066+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-01-23T11:59:43.599486345Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.559647ms
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,268] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63a76968-fae6-4e69-9528-57bfc1bb20a8","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d7b3019b-93eb-43bb-bab7-dfe71e0e46ae","timestampMs":1706011235053,"name":"apex-dea203ac-ecd5-4158-b932-7658b548b741","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-23T11:59:43.603272162Z level=info msg="Executing migration" id="add index builtin_role.role_id"
policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
kafka | [2024-01-23 12:00:14,268] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-23T12:00:35.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,274] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-23T11:59:43.604715744Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.443992ms
policy-pap | [2024-01-23T12:00:35.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping enqueue
policy-db-migrator |
kafka | [2024-01-23 12:00:14,274] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:43.610241847Z level=info msg="Executing migration" id="add index builtin_role.name"
policy-pap | [2024-01-23T12:00:35.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping timer
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,274] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:43.611423446Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.181599ms
policy-pap | [2024-01-23T12:00:35.066+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=63a76968-fae6-4e69-9528-57bfc1bb20a8, expireMs=1706011265043]
policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
kafka | [2024-01-23 12:00:14,274] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:43.617489716Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
policy-pap | [2024-01-23T12:00:35.067+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopping listener
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,274] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:43.625830348Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.340822ms
policy-pap | [2024-01-23T12:00:35.067+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate stopped
policy-db-migrator |
kafka | [2024-01-23 12:00:14,282] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-23T11:59:43.65860142Z level=info msg="Executing migration" id="add index builtin_role.org_id"
policy-pap | [2024-01-23T12:00:35.073+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 PdpUpdate successful
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:43.660526795Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.925375ms
kafka | [2024-01-23 12:00:14,283] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-23T12:00:35.073+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-dea203ac-ecd5-4158-b932-7658b548b741 has no more requests
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
grafana | logger=migrator t=2024-01-23T11:59:43.66528317Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
kafka | [2024-01-23 12:00:14,283] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:42.732+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.666360393Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.076743ms
kafka | [2024-01-23 12:00:14,283] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:42.740+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
grafana | logger=migrator t=2024-01-23T11:59:43.671018174Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
kafka | [2024-01-23 12:00:14,283] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-23T12:00:43.125+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.672494377Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.475833ms
kafka | [2024-01-23 12:00:14,290] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-23T12:00:43.734+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:43.683778515Z level=info msg="Executing migration" id="add unique index role.uid"
kafka | [2024-01-23 12:00:14,291] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-23T12:00:43.734+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.684918761Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.140246ms
kafka | [2024-01-23 12:00:14,291] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:44.312+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
grafana | logger=migrator t=2024-01-23T11:59:43.688244446Z level=info msg="Executing migration" id="create seed assignment table"
kafka | [2024-01-23 12:00:14,291] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:44.658+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.68913014Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=885.464µs
kafka | [2024-01-23 12:00:14,291] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-23T12:00:44.770+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:43.694233842Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
kafka | [2024-01-23 12:00:14,301] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-23T12:00:44.770+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.695468333Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.234491ms
kafka | [2024-01-23 12:00:14,302] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-23T12:00:44.771+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup
policy-db-migrator | TRUNCATE TABLE sequence
grafana | logger=migrator t=2024-01-23T11:59:43.698780167Z level=info msg="Executing migration" id="add column hidden to role table"
kafka | [2024-01-23 12:00:14,302] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:44.788+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-01-23T12:00:44Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-01-23T12:00:44Z, user=policyadmin)]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-23T11:59:43.706871987Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.09184ms
kafka | [2024-01-23 12:00:14,303] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-23T12:00:45.531+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:43.743156222Z level=info msg="Executing migration" id="permission kind migration"
kafka | [2024-01-23 12:00:14,303] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-23T12:00:45.533+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:43.753787468Z level=info msg="Migration successfully executed" id="permission kind migration" duration=10.632246ms
kafka | [2024-01-23 12:00:14,311] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-23T12:00:45.533+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
kafka | [2024-01-23 12:00:14,311] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:43.760870759Z level=info msg="Executing migration" id="permission attribute migration"
policy-pap | [2024-01-23T12:00:45.533+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,312] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:43.773747736Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=12.877137ms
policy-pap | [2024-01-23T12:00:45.533+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup
policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
kafka | [2024-01-23 12:00:14,312] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:43.778187925Z level=info msg="Executing migration" id="permission identifier migration"
policy-pap | [2024-01-23T12:00:45.546+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-23T12:00:45Z, user=policyadmin)]
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,312] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
(state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:43.786443234Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.254419ms policy-pap | [2024-01-23T12:00:45.900+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup policy-db-migrator | kafka | [2024-01-23 12:00:14,318] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-23T11:59:43.78960005Z level=info msg="Executing migration" id="add permission identifier index" policy-pap | [2024-01-23T12:00:45.900+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup policy-db-migrator | -------------- kafka | [2024-01-23 12:00:14,318] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-23T11:59:43.790377318Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=774.638µs policy-pap | [2024-01-23T12:00:45.900+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 policy-db-migrator | DROP TABLE pdpstatistics kafka | [2024-01-23 12:00:14,318] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-23T11:59:43.796722472Z level=info msg="Executing migration" id="create query_history table v1" policy-pap | [2024-01-23T12:00:45.900+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 policy-db-migrator | -------------- kafka | [2024-01-23 12:00:14,318] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-23T11:59:43.798231497Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.510645ms policy-pap | [2024-01-23T12:00:45.900+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup policy-db-migrator | kafka | [2024-01-23 12:00:14,319] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:43.802958481Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" policy-pap | [2024-01-23T12:00:45.900+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup policy-db-migrator | kafka | [2024-01-23 12:00:14,334] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-23T11:59:43.805206002Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.246671ms policy-pap | [2024-01-23T12:00:45.916+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-23T12:00:45Z, user=policyadmin)] policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql kafka | [2024-01-23 12:00:14,334] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-23T11:59:43.808836282Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" policy-pap | [2024-01-23T12:01:04.879+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=bc5b9e09-f9ff-4d83-b72b-00f5bbd6915c, expireMs=1706011264879] policy-db-migrator | -------------- kafka | [2024-01-23 12:00:14,335] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-23T11:59:43.808933286Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=95.415µs policy-pap | [2024-01-23T12:01:04.992+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=d5747120-881d-4c2e-9c54-68eb2a8c3ec9, expireMs=1706011264991] policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats kafka | [2024-01-23 12:00:14,335] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-23T11:59:43.812528174Z level=info msg="Executing migration" id="rbac disabled migrator" policy-pap | [2024-01-23T12:01:06.504+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup policy-db-migrator | -------------- kafka | [2024-01-23 12:00:14,335] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
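The two PolicyAudit entries above show PAP flushing undeploy audit records to its database. A minimal sketch for inspecting those records afterwards, assuming the table created by the 0190-jpapolicyaudit.sql script listed later in this log; the column names are guesses read off the PolicyAudit(...) toString output, not a confirmed schema:

    -- Hedged sketch: jpapolicyaudit comes from the 0190-jpapolicyaudit.sql
    -- script name in this log; the columns are inferred from the
    -- PolicyAudit(...) toString above and may not match the real schema.
    SELECT pdpGroup, pdpType, action, timeStamp
    FROM jpapolicyaudit
    WHERE action = 'UNDEPLOYMENT'
    ORDER BY timeStamp DESC;
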
grafana | logger=migrator t=2024-01-23T11:59:43.812597678Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=70.404µs
policy-pap | [2024-01-23T12:01:06.506+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
policy-db-migrator |
kafka | [2024-01-23 12:00:14,341] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-23T11:59:43.817739212Z level=info msg="Executing migration" id="teams permissions migration"
policy-db-migrator |
kafka | [2024-01-23 12:00:14,342] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:43.818582284Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=838.941µs
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
kafka | [2024-01-23 12:00:14,342] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:43.824461565Z level=info msg="Executing migration" id="dashboard permissions"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,342] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:43.825507116Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.047522ms
policy-db-migrator | DROP TABLE statistics_sequence
kafka | [2024-01-23 12:00:14,342] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:43.82900934Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
policy-db-migrator | --------------
kafka | [2024-01-23 12:00:14,352] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-23T11:59:43.829739606Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=729.977µs
policy-db-migrator |
grafana | logger=migrator t=2024-01-23T11:59:43.832893542Z level=info msg="Executing migration" id="drop managed folder create actions"
kafka | [2024-01-23 12:00:14,353] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | policyadmin: OK: upgrade (1300)
grafana | logger=migrator t=2024-01-23T11:59:43.833156255Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=262.353µs
kafka | [2024-01-23 12:00:14,353] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
policy-db-migrator | name version
grafana | logger=migrator t=2024-01-23T11:59:43.838616685Z level=info msg="Executing migration" id="alerting notification permissions"
kafka | [2024-01-23 12:00:14,353] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | policyadmin 1300
grafana | logger=migrator t=2024-01-23T11:59:43.839172862Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=556.677µs
kafka | [2024-01-23 12:00:14,353] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
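At this point the policyadmin migrator has echoed and executed its last teardown scripts (0100-pdpstatistics.sql, 0110-jpapdpstatistics_enginestats.sql, 0120-statistics_sequence.sql) and reports the schema at version 1300. Reconstructed only from the statements echoed between the "--------------" markers above, those scripts amount to the following; the real files may carry additional statements or guards:

    -- Reconstructed from the statements echoed in this log.
    DROP INDEX IDXTSIDX1 ON pdpstatistics;   -- 0100-pdpstatistics.sql
    DROP TABLE pdpstatistics;                -- 0100-pdpstatistics.sql
    DROP TABLE jpapdpstatistics_enginestats; -- 0110-jpapdpstatistics_enginestats.sql
    DROP TABLE statistics_sequence;          -- 0120-statistics_sequence.sql
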
policy-db-migrator | ID script operation from_version to_version tag success atTime
grafana | logger=migrator t=2024-01-23T11:59:43.848581288Z level=info msg="Executing migration" id="create query_history_star table v1"
kafka | [2024-01-23 12:00:14,364] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:41
grafana | logger=migrator t=2024-01-23T11:59:43.850023689Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.441451ms
kafka | [2024-01-23 12:00:14,365] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:41
grafana | logger=migrator t=2024-01-23T11:59:43.854711981Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
kafka | [2024-01-23 12:00:14,365] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:41
grafana | logger=migrator t=2024-01-23T11:59:43.85589646Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.186188ms
kafka | [2024-01-23 12:00:14,365] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:41
grafana | logger=migrator t=2024-01-23T11:59:43.859156091Z level=info msg="Executing migration" id="add column org_id in query_history_star"
kafka | [2024-01-23 12:00:14,365] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:41
grafana | logger=migrator t=2024-01-23T11:59:43.867290512Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.134121ms
kafka | [2024-01-23 12:00:14,377] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:41
grafana | logger=migrator t=2024-01-23T11:59:43.871674799Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
kafka | [2024-01-23 12:00:14,377] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.871780614Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=105.415µs
kafka | [2024-01-23 12:00:14,378] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.876057826Z level=info msg="Executing migration" id="create correlation table v1"
kafka | [2024-01-23 12:00:14,378] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.877019544Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=961.027µs
kafka | [2024-01-23 12:00:14,378] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
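The migrator is now printing its changelog: one row per script, with the columns ID, script, operation, from_version, to_version, tag, success, atTime shown in the header above. A hedged query for auditing that record; the table name below is an assumption, only the column layout comes from this log:

    -- Hedged sketch: list any scripts the migrator did not mark successful.
    -- policyadmin_schema_changelog is an assumed table name.
    SELECT ID, script, operation, from_version, to_version, tag, atTime
    FROM policyadmin_schema_changelog
    WHERE success <> 1
    ORDER BY ID;
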
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.882273403Z level=info msg="Executing migration" id="add index correlations.uid"
kafka | [2024-01-23 12:00:14,386] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.883986108Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.714015ms
kafka | [2024-01-23 12:00:14,386] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.893508459Z level=info msg="Executing migration" id="add index correlations.source_uid"
kafka | [2024-01-23 12:00:14,387] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.894797973Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.290954ms
kafka | [2024-01-23 12:00:14,387] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.89817331Z level=info msg="Executing migration" id="add correlation config column"
kafka | [2024-01-23 12:00:14,387] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.907759824Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.585774ms
kafka | [2024-01-23 12:00:14,395] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.910984494Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
kafka | [2024-01-23 12:00:14,395] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.912559432Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.573657ms
kafka | [2024-01-23 12:00:14,395] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.921033881Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
kafka | [2024-01-23 12:00:14,396] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.923452211Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.419899ms
kafka | [2024-01-23 12:00:14,396] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.927388295Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
kafka | [2024-01-23 12:00:14,405] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.958065603Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=30.673908ms
kafka | [2024-01-23 12:00:14,406] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.962006098Z level=info msg="Executing migration" id="create correlation v2"
kafka | [2024-01-23 12:00:14,406] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.962748304Z level=info msg="Migration successfully executed" id="create correlation v2" duration=741.646µs
kafka | [2024-01-23 12:00:14,407] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.967007685Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
kafka | [2024-01-23 12:00:14,407] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:42
grafana | logger=migrator t=2024-01-23T11:59:43.968888908Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.877863ms
kafka | [2024-01-23 12:00:14,415] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
grafana | logger=migrator t=2024-01-23T11:59:43.974082155Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
kafka | [2024-01-23 12:00:14,416] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
grafana | logger=migrator t=2024-01-23T11:59:43.975578459Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.496894ms
kafka | [2024-01-23 12:00:14,416] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
grafana | logger=migrator t=2024-01-23T11:59:43.98023992Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
kafka | [2024-01-23 12:00:14,416] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
grafana | logger=migrator t=2024-01-23T11:59:43.981421738Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.181588ms
kafka | [2024-01-23 12:00:14,417] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
grafana | logger=migrator t=2024-01-23T11:59:43.984782194Z level=info msg="Executing migration" id="copy correlation v1 to v2"
kafka | [2024-01-23 12:00:14,424] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
grafana | logger=migrator t=2024-01-23T11:59:43.984992565Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=210.401µs
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
kafka | [2024-01-23 12:00:14,425] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:43.996763347Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
kafka | [2024-01-23 12:00:14,425] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:43.997866592Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.108075ms
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
kafka | [2024-01-23 12:00:14,425] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:44.002327462Z level=info msg="Executing migration" id="add provisioning column"
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
kafka | [2024-01-23 12:00:14,425] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
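The correlation migrations above follow a common rename/recreate/copy/drop pattern for rebuilding a table in place. A schematic of that sequence, with the step names taken from the migration ids in this log; the column list is a placeholder, not Grafana's actual correlation schema:

    -- Schematic only: real column definitions differ.
    ALTER TABLE correlation RENAME TO correlation_tmp_qwerty;  -- "Rename table correlation to correlation_tmp_qwerty - v1"
    CREATE TABLE correlation (
        uid        VARCHAR(40) NOT NULL,
        source_uid VARCHAR(40) NOT NULL
    );                                                          -- "create correlation v2"
    INSERT INTO correlation (uid, source_uid)
        SELECT uid, source_uid FROM correlation_tmp_qwerty;     -- "copy correlation v1 to v2"
    DROP TABLE correlation_tmp_qwerty;                          -- "drop correlation_tmp_qwerty"
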
grafana | logger=migrator t=2024-01-23T11:59:44.010804041Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.475929ms
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
kafka | [2024-01-23 12:00:14,431] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-23T11:59:44.015603008Z level=info msg="Executing migration" id="create entity_events table"
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
kafka | [2024-01-23 12:00:14,433] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:44.017074551Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.470792ms
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
kafka | [2024-01-23 12:00:14,433] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:44.021897689Z level=info msg="Executing migration" id="create dashboard public config v1"
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
kafka | [2024-01-23 12:00:14,433] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:44.023731519Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.834491ms
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
kafka | [2024-01-23 12:00:14,433] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.03105411Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
kafka | [2024-01-23 12:00:14,439] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-23T11:59:44.031824738Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43
kafka | [2024-01-23 12:00:14,440] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:44.036168783Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
kafka | [2024-01-23 12:00:14,440] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:44.036671548Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
kafka | [2024-01-23 12:00:14,440] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:44.078575746Z level=info msg="Executing migration" id="Drop old dashboard public config table"
kafka | [2024-01-23 12:00:14,440] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(UZhXoIGVRReKBLH6iRv9pA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
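The two level=warn entries above are benign: the migrator found the index change already present in the schema but no matching row in its migration log, so it records the step as skipped rather than re-running it. A hedged look at that bookkeeping table, which in Grafana databases is commonly named migration_log; treat the name and columns here as assumptions:

    -- Hedged sketch: migration_log and its columns are assumptions based
    -- on the "not recorded in migration log" message above.
    SELECT migration_id, success, timestamp
    FROM migration_log
    ORDER BY timestamp DESC;
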
(state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:44.079851599Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.277894ms kafka | [2024-01-23 12:00:14,449] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43 grafana | logger=migrator t=2024-01-23T11:59:44.085295347Z level=info msg="Executing migration" id="recreate dashboard public config v1" kafka | [2024-01-23 12:00:14,449] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43 grafana | logger=migrator t=2024-01-23T11:59:44.086539869Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.244002ms kafka | [2024-01-23 12:00:14,449] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43 grafana | logger=migrator t=2024-01-23T11:59:44.092094363Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" kafka | [2024-01-23 12:00:14,450] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43 grafana | logger=migrator t=2024-01-23T11:59:44.093222498Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.128185ms kafka | [2024-01-23 12:00:14,450] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43 grafana | logger=migrator t=2024-01-23T11:59:44.097388284Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" kafka | [2024-01-23 12:00:14,459] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:43 grafana | logger=migrator t=2024-01-23T11:59:44.09852153Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.130766ms kafka | [2024-01-23 12:00:14,459] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.101641364Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" kafka | [2024-01-23 12:00:14,459] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.102729428Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.088584ms kafka | [2024-01-23 12:00:14,460] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.106050802Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" kafka | [2024-01-23 12:00:14,460] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.107153726Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.103045ms kafka | [2024-01-23 12:00:14,467] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.111367874Z level=info msg="Executing migration" id="Drop public config table" kafka | [2024-01-23 12:00:14,467] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.112235917Z level=info msg="Migration successfully executed" id="Drop public config table" duration=865.423µs kafka | [2024-01-23 12:00:14,468] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.115605983Z level=info msg="Executing migration" id="Recreate dashboard public config v2" kafka | [2024-01-23 12:00:14,468] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.116660625Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.052762ms kafka | [2024-01-23 12:00:14,468] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.120573608Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" kafka | [2024-01-23 12:00:14,505] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.121746446Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.172858ms kafka | [2024-01-23 12:00:14,506] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.19558167Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" kafka | [2024-01-23 12:00:14,506] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.19741213Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.83036ms kafka | [2024-01-23 12:00:14,506] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.202608787Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" kafka | [2024-01-23 12:00:14,507] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.204110181Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.501984ms kafka | [2024-01-23 12:00:14,516] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.208520398Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" kafka | [2024-01-23 12:00:14,517] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 grafana | logger=migrator t=2024-01-23T11:59:44.240954949Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=32.430331ms policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 kafka | [2024-01-23 12:00:14,517] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-23T11:59:44.244845791Z level=info msg="Executing migration" id="add annotations_enabled column" policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 kafka | [2024-01-23 12:00:14,517] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-23T11:59:44.253288548Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.442277ms policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 kafka | [2024-01-23 12:00:14,517] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:44.25617954Z level=info msg="Executing migration" id="add time_selection_enabled column" policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:44 kafka | [2024-01-23 12:00:14,526] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-23T11:59:44.262429289Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.250309ms policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 kafka | [2024-01-23 12:00:14,527] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-23T11:59:44.266576124Z level=info msg="Executing migration" id="delete orphaned public dashboards" policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 kafka | [2024-01-23 12:00:14,527] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-23T11:59:44.266894839Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=306.396µs policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 kafka | [2024-01-23 12:00:14,527] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-23T11:59:44.269434285Z level=info msg="Executing migration" id="add share column" policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 kafka | [2024-01-23 12:00:14,527] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-01-23T11:59:44.2780537Z level=info msg="Migration successfully executed" id="add share column" duration=8.616826ms policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 kafka | [2024-01-23 12:00:14,534] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-23T11:59:44.282874958Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 kafka | [2024-01-23 12:00:14,534] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-23T11:59:44.283073808Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=198.37µs policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 kafka | [2024-01-23 12:00:14,535] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2024-01-23 12:00:14,535] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 grafana | logger=migrator t=2024-01-23T11:59:44.286517198Z level=info msg="Executing migration" id="create file table" kafka | [2024-01-23 12:00:14,535] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 grafana | logger=migrator t=2024-01-23T11:59:44.28717146Z level=info msg="Migration successfully executed" id="create file table" duration=653.822µs kafka | [2024-01-23 12:00:14,542] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 grafana | logger=migrator t=2024-01-23T11:59:44.291167997Z level=info msg="Executing migration" id="file table idx: path natural pk" kafka | [2024-01-23 12:00:14,542] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 grafana | logger=migrator t=2024-01-23T11:59:44.292292963Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.124746ms kafka | [2024-01-23 12:00:14,542] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 grafana | logger=migrator t=2024-01-23T11:59:44.295621377Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" kafka | [2024-01-23 12:00:14,542] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 grafana | logger=migrator t=2024-01-23T11:59:44.296749553Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.128235ms kafka | [2024-01-23 12:00:14,542] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 grafana | logger=migrator t=2024-01-23T11:59:44.30013907Z level=info msg="Executing migration" id="create file_meta table" kafka | [2024-01-23 12:00:14,550] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 grafana | logger=migrator t=2024-01-23T11:59:44.30095919Z level=info msg="Migration successfully executed" id="create file_meta table" duration=819.73µs kafka | [2024-01-23 12:00:14,551] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 grafana | logger=migrator t=2024-01-23T11:59:44.305329006Z level=info msg="Executing migration" id="file table idx: path key" kafka | [2024-01-23 12:00:14,551] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 grafana | logger=migrator t=2024-01-23T11:59:44.307297433Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.966307ms kafka | [2024-01-23 12:00:14,551] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:45 grafana | logger=migrator t=2024-01-23T11:59:44.315284517Z level=info msg="Executing migration" id="set path collation in file table" kafka | [2024-01-23 12:00:14,551] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46 grafana | logger=migrator t=2024-01-23T11:59:44.315442705Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=163.058µs kafka | [2024-01-23 12:00:14,556] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46 grafana | logger=migrator t=2024-01-23T11:59:44.325439618Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" kafka | [2024-01-23 12:00:14,557] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46 grafana | logger=migrator t=2024-01-23T11:59:44.325570375Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=131.507µs kafka | [2024-01-23 12:00:14,557] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46 grafana | logger=migrator t=2024-01-23T11:59:44.329278578Z level=info msg="Executing migration" id="managed permissions migration" kafka | [2024-01-23 12:00:14,557] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46 grafana | logger=migrator t=2024-01-23T11:59:44.330145801Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=898.755µs kafka | [2024-01-23 12:00:14,557] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger)
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
grafana | logger=migrator t=2024-01-23T11:59:44.334682685Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
kafka | [2024-01-23 12:00:14,563] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
grafana | logger=migrator t=2024-01-23T11:59:44.335039022Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=354.168µs
kafka | [2024-01-23 12:00:14,563] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
grafana | logger=migrator t=2024-01-23T11:59:44.337881332Z level=info msg="Executing migration" id="RBAC action name migrator"
kafka | [2024-01-23 12:00:14,564] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
grafana | logger=migrator t=2024-01-23T11:59:44.339085662Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.20398ms
kafka | [2024-01-23 12:00:14,564] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
grafana | logger=migrator t=2024-01-23T11:59:44.34228496Z level=info msg="Executing migration" id="Add UID column to playlist"
kafka | [2024-01-23 12:00:14,564] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2301241159410800u 1 2024-01-23 11:59:46
grafana | logger=migrator t=2024-01-23T11:59:44.354616168Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=12.331708ms
kafka | [2024-01-23 12:00:14,574] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:46
grafana | logger=migrator t=2024-01-23T11:59:44.360009574Z level=info msg="Executing migration" id="Update uid column values in playlist"
kafka | [2024-01-23 12:00:14,574] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:46
grafana | logger=migrator t=2024-01-23T11:59:44.360171232Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=161.798µs
kafka | [2024-01-23 12:00:14,574] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:46
grafana | logger=migrator t=2024-01-23T11:59:44.369618869Z level=info msg="Executing migration" id="Add index for uid in playlist"
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:46
kafka | [2024-01-23 12:00:14,574] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:44.371364915Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.745066ms
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:46
kafka | [2024-01-23 12:00:14,574] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.375049567Z level=info msg="Executing migration" id="update group index for alert rules"
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
kafka | [2024-01-23 12:00:14,583] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-23T11:59:44.375637676Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=589.429µs
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
kafka | [2024-01-23 12:00:14,584] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:44.379049674Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
kafka | [2024-01-23 12:00:14,584] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:44.379256554Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=206.99µs
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
kafka | [2024-01-23 12:00:14,584] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:44.383670242Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
kafka | [2024-01-23 12:00:14,584] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.384119684Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=449.422µs
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
kafka | [2024-01-23 12:00:14,592] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-23T11:59:44.387733813Z level=info msg="Executing migration" id="add action column to seed_assignment"
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
kafka | [2024-01-23 12:00:14,593] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:44.399282493Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.549119ms
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2301241159410900u 1 2024-01-23 11:59:47
kafka | [2024-01-23 12:00:14,593] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:44.40834738Z level=info msg="Executing migration" id="add scope column to seed_assignment"
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
kafka | [2024-01-23 12:00:14,593] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:44.417032649Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.683698ms
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
kafka | [2024-01-23 12:00:14,593] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
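The console above interleaves records from the policy-db-migrator, grafana, and kafka containers, and line wrapping has split some records mid-message. A minimal Python sketch for de-interleaving such a combined dump back into one record per line; the file name is a placeholder, and it assumes every record starts with one of the known "<service> | " prefixes:

    import re

    LOG_FILE = "console.log"  # hypothetical path to the combined dump
    SERVICES = ("policy-db-migrator", "grafana", "kafka")
    # Zero-width split point just before each "<service> | " prefix.
    PREFIX = re.compile(r"(?=\b(?:%s) \| )" % "|".join(SERVICES))

    with open(LOG_FILE, encoding="utf-8") as fh:
        blob = fh.read().replace("\n", " ")  # first undo the hard wrapping

    for record in filter(None, (r.strip() for r in PREFIX.split(blob))):
        print(record)

This is only a readability aid for dumps like the one in this build; it assumes no record body itself contains a "<service> | " sequence.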
grafana | logger=migrator t=2024-01-23T11:59:44.420895899Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
kafka | [2024-01-23 12:00:14,598] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-23T11:59:44.422752951Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.856582ms
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
kafka | [2024-01-23 12:00:14,599] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-23T11:59:44.428966428Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
kafka | [2024-01-23 12:00:14,599] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:44.536781468Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=107.80032ms
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
kafka | [2024-01-23 12:00:14,599] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-23T11:59:44.540220678Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
kafka | [2024-01-23 12:00:14,599] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
grafana | logger=migrator t=2024-01-23T11:59:44.541427037Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.160937ms
kafka | [2024-01-23 12:00:14,605] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
grafana | logger=migrator t=2024-01-23T11:59:44.545267047Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
kafka | [2024-01-23 12:00:14,605] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2301241159411000u 1 2024-01-23 11:59:47
grafana | logger=migrator t=2024-01-23T11:59:44.546387632Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.142066ms
kafka | [2024-01-23 12:00:14,605] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2301241159411100u 1 2024-01-23 11:59:47
grafana | logger=migrator t=2024-01-23T11:59:44.552739596Z level=info msg="Executing migration" id="add primary key to seed_assigment"
kafka | [2024-01-23 12:00:14,606] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2301241159411200u 1 2024-01-23 11:59:47
grafana | logger=migrator t=2024-01-23T11:59:44.592686147Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=39.920471ms
kafka | [2024-01-23 12:00:14,606] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(y4LhsVCjShWp08qTM9318g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2301241159411200u 1 2024-01-23 11:59:48
grafana | logger=migrator t=2024-01-23T11:59:44.628762117Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2301241159411200u 1 2024-01-23 11:59:48
grafana | logger=migrator t=2024-01-23T11:59:44.629192739Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=428.671µs
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2301241159411200u 1 2024-01-23 11:59:48
grafana | logger=migrator t=2024-01-23T11:59:44.633111552Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2301241159411300u 1 2024-01-23 11:59:48
grafana | logger=migrator t=2024-01-23T11:59:44.633976045Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=864.733µs
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2301241159411300u 1 2024-01-23 11:59:48
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2301241159411300u 1 2024-01-23 11:59:48
grafana | logger=migrator t=2024-01-23T11:59:44.637368722Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
policy-db-migrator | policyadmin: OK @ 1300
grafana | logger=migrator t=2024-01-23T11:59:44.637625745Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=256.623µs
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.642950778Z level=info msg="Executing migration" id="create folder table"
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
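Each numbered policy-db-migrator row above (script name, operation, from/to schema version, tag, success flag, timestamp) is one entry in the migrator's audit trail, which closes with the summary "policyadmin: OK @ 1300". A sketch of inspecting such an audit table after the run, assuming the pymysql package and a table named policyadmin_schema_changelog in the policyadmin database; the table name, host, and credentials here are assumptions, not values taken from this log:

    import pymysql

    # All connection details below are illustrative placeholders.
    conn = pymysql.connect(host="mariadb", user="policy_user",
                           password="policy_user", database="policyadmin")
    try:
        with conn.cursor() as cur:
            # SELECT * avoids guessing column names; each row should mirror
            # the fields printed per migration line in the console above.
            cur.execute("SELECT * FROM policyadmin_schema_changelog ORDER BY 1")
            for row in cur.fetchall():
                print(row)
    finally:
        conn.close()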
grafana | logger=migrator t=2024-01-23T11:59:44.645323865Z level=info msg="Migration successfully executed" id="create folder table" duration=2.372848ms
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.651961452Z level=info msg="Executing migration" id="Add index for parent_uid"
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.65313508Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.173808ms
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.65799886Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.659319736Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.321035ms
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.664355374Z level=info msg="Executing migration" id="Update folder title length"
grafana | logger=migrator t=2024-01-23T11:59:44.664382525Z level=info msg="Migration successfully executed" id="Update folder title length" duration=26.131µs
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.66813106Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.670000213Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.867432ms
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.673417391Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.675302794Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.881833ms
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.679458449Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.680851068Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.391609ms
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.689708665Z level=info msg="Executing migration" id="create anon_device table"
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.690939256Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.233571ms
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.697623476Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.700087367Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.460851ms
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.707741495Z level=info msg="Executing migration" id="add index anon_device.updated_at"
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.709474401Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.733935ms
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.714959031Z level=info msg="Executing migration" id="create signing_key table"
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.71615033Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.191749ms
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.72385084Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.725743774Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.897183ms
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.733040764Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.734543968Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.505865ms
kafka | [2024-01-23 12:00:14,613] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.744813275Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.746587572Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=1.785398ms
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.751703145Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.76276072Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=11.055156ms
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.767791789Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.768524925Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=737.547µs
grafana | logger=migrator t=2024-01-23T11:59:44.7716728Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.772947023Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.273403ms
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.776150681Z level=info msg="Executing migration" id="create sso_setting table"
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.777083587Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=935.386µs
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.787036748Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.78787887Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=842.322µs
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.793225184Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
grafana | logger=migrator t=2024-01-23T11:59:44.793802602Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=573.818µs
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
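The LogLoader / "Created log" records above show the broker creating each __consumer_offsets partition (compacted, producer compression) and then completing the become-leader transition for all of them, partitions 0 through 49, plus policy-pdp-pap-0. A quick way to confirm that partition layout from a client, sketched with the third-party kafka-python package; the bootstrap address is a placeholder, not a value from this build:

    from kafka import KafkaConsumer

    # Placeholder address for wherever this broker's listener is mapped.
    consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
    parts = consumer.partitions_for_topic("__consumer_offsets")
    if parts is None:
        print("topic not found")
    else:
        # Expect partitions 0-49 for the default offsets.topic.num.partitions=50.
        print(len(parts), sorted(parts))
    consumer.close()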
grafana | logger=migrator t=2024-01-23T11:59:44.798397109Z level=info msg="migrations completed" performed=523 skipped=0 duration=5.463601602s
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
grafana | logger=sqlstore t=2024-01-23T11:59:44.807756041Z level=info msg="Created default admin" user=admin
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
grafana | logger=sqlstore t=2024-01-23T11:59:44.808056686Z level=info msg="Created default organization"
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
grafana | logger=secrets t=2024-01-23T11:59:44.81401Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
grafana | logger=plugin.store t=2024-01-23T11:59:44.832469861Z level=info msg="Loading plugins..."
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
grafana | logger=local.finder t=2024-01-23T11:59:44.870777781Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
grafana | logger=plugin.store t=2024-01-23T11:59:44.870832664Z level=info msg="Plugins loaded" count=55 duration=38.363754ms
grafana | logger=query_data t=2024-01-23T11:59:44.874434971Z level=info msg="Query Service initialization"
kafka | [2024-01-23 12:00:14,614] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
grafana | logger=live.push_http t=2024-01-23T11:59:44.878640979Z level=info msg="Live Push Gateway initialization"
kafka | [2024-01-23 12:00:14,624] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=ngalert.migration t=2024-01-23T11:59:44.885028514Z level=info msg=Starting
kafka | [2024-01-23 12:00:14,630] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=ngalert.migration orgID=1 t=2024-01-23T11:59:44.886031414Z level=info msg="Migrating alerts for organisation"
kafka | [2024-01-23 12:00:14,637] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=ngalert.migration orgID=1 t=2024-01-23T11:59:44.886463905Z level=info msg="Alerts found to migrate" alerts=0
kafka | [2024-01-23 12:00:14,637] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=ngalert.migration orgID=1 t=2024-01-23T11:59:44.886978841Z level=warn msg="No available receivers"
kafka | [2024-01-23 12:00:14,637] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-01-23T11:59:44.890128006Z level=info msg="Completed legacy migration"
kafka | [2024-01-23 12:00:14,637] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=infra.usagestats.collector t=2024-01-23T11:59:44.947028464Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
kafka | [2024-01-23 12:00:14,637] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=provisioning.datasources t=2024-01-23T11:59:44.948907717Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
kafka | [2024-01-23 12:00:14,638] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=provisioning.alerting t=2024-01-23T11:59:44.962221554Z level=info msg="starting to provision alerting"
kafka | [2024-01-23 12:00:14,638] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=provisioning.alerting t=2024-01-23T11:59:44.962237175Z level=info msg="finished to provision alerting"
kafka | [2024-01-23 12:00:14,638] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=grafanaStorageLogger t=2024-01-23T11:59:44.962701597Z level=info msg="Storage starting"
kafka | [2024-01-23 12:00:14,638] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=ngalert.state.manager t=2024-01-23T11:59:44.963542479Z level=info msg="Warming state cache for startup"
kafka | [2024-01-23 12:00:14,638] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=http.server t=2024-01-23T11:59:44.965235623Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
kafka | [2024-01-23 12:00:14,638] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=ngalert.multiorg.alertmanager t=2024-01-23T11:59:44.965418892Z level=info msg="Starting MultiOrg Alertmanager"
kafka | [2024-01-23 12:00:14,638] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=sqlstore.transactions t=2024-01-23T11:59:44.976425275Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
kafka | [2024-01-23 12:00:14,638] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=grafana.update.checker t=2024-01-23T11:59:44.990726761Z level=info msg="Update check succeeded" duration=27.374421ms
kafka | [2024-01-23 12:00:14,638] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=ngalert.state.manager t=2024-01-23T11:59:44.994755789Z level=info msg="State cache has been initialized" states=0 duration=31.2119ms
kafka | [2024-01-23 12:00:14,638] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=ngalert.scheduler t=2024-01-23T11:59:44.99477778Z level=info msg="Starting scheduler" tickInterval=10s
kafka | [2024-01-23 12:00:14,638] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=ticker t=2024-01-23T11:59:44.994818252Z level=info msg=starting first_tick=2024-01-23T11:59:50Z
kafka | [2024-01-23 12:00:14,639] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=plugins.update.checker t=2024-01-23T11:59:45.037990913Z level=info msg="Update check succeeded" duration=75.144669ms
kafka | [2024-01-23 12:00:14,639] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=sqlstore.transactions t=2024-01-23T11:59:45.094321363Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
kafka | [2024-01-23 12:00:14,639] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
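At this point Grafana has finished its 523 migrations, created the default admin and organization, loaded 55 plugins, and is listening on [::]:3000 as logged above. A minimal readiness probe against that listener, sketched with the requests package; /api/health is Grafana's standard health endpoint, and the host assumes the container port is mapped locally:

    import requests

    # Grafana logs 'msg="HTTP Server Listen" address=[::]:3000' above;
    # adjust the host/port if the container maps it elsewhere.
    resp = requests.get("http://localhost:3000/api/health", timeout=5)
    resp.raise_for_status()
    print(resp.json())  # typically reports database status and version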
grafana | logger=infra.usagestats t=2024-01-23T12:00:59.975838576Z level=info msg="Usage stats are ready to report"
kafka | [2024-01-23 12:00:14,640] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 5 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,642] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
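Each "Elected as the group coordinator for partition N" record above reflects how Kafka assigns a consumer group to exactly one __consumer_offsets partition: the group id's Java hash, with the sign bit masked, modulo the offsets topic's partition count (50 by default, matching partitions 0-49 in this log). A small Python sketch of that mapping; the example group id is illustrative only:

    def java_string_hashcode(s: str) -> int:
        # 32-bit signed hash, as Java's String.hashCode computes it
        # (assumes the group id is plain ASCII/BMP text).
        h = 0
        for ch in s:
            h = (31 * h + ord(ch)) & 0xFFFFFFFF
        return h - (1 << 32) if h >= (1 << 31) else h

    def coordinator_partition(group_id: str, num_partitions: int = 50) -> int:
        # Kafka uses Utils.abs(groupId.hashCode) % partitionCount, where
        # Utils.abs masks the sign bit rather than negating the value.
        return (java_string_hashcode(group_id) & 0x7FFFFFFF) % num_partitions

    print(coordinator_partition("policy-pap"))  # hypothetical group id

The broker elected leader for a given __consumer_offsets partition becomes the coordinator for every group that hashes to it, which is why this single-broker setup wins all 50 elections.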
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-23 12:00:14,643] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-23 12:00:14,643] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-23 12:00:14,644] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-23 12:00:14,644] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-23 12:00:14,644] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 5 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,644] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,644] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,644] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,644] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,645] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,645] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,645] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,645] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,646] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 4 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,646] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,646] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,646] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,646] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,646] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,647] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,647] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 5 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,647] INFO [Broker id=1] Finished LeaderAndIsr request in 714ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2024-01-23 12:00:14,648] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,648] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,649] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,649] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,649] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,649] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,650] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,650] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,650] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,650] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,650] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,650] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,651] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,651] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,651] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,651] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,651] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,651] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
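The per-partition load report above is easy to summarize straight from the container log. A minimal sketch, assuming the broker container is still running and is named kafka as in this compose job (this pipeline is illustrative, not part of the CSIT scripts):

    # Summarize __consumer_offsets load latencies from the broker log,
    # sorted by load time in milliseconds (second field).
    docker logs kafka 2>&1 \
      | grep 'Finished loading offsets and group metadata' \
      | sed -E 's/.*from (__consumer_offsets-[0-9]+) in ([0-9]+) milliseconds.*/\1 \2/' \
      | sort -k2 -n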
kafka | [2024-01-23 12:00:14,652] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=y4LhsVCjShWp08qTM9318g, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=UZhXoIGVRReKBLH6iRv9pA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-01-23 12:00:14,652] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,652] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,652] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,652] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,652] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,653] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,653] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,653] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
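Every LeaderAndIsrPartitionError in the response above carries errorCode=0, i.e. all 51 partition state changes were acknowledged cleanly. A hedged way to verify the resulting leader/ISR assignments from outside the broker, assuming the Confluent image used in this compose setup (on Apache tarballs the tool is kafka-topics.sh):

    # Show leader, ISR and replicas for the application topic;
    # kafka:9092 is the listener named in the trace above.
    docker exec kafka kafka-topics --bootstrap-server kafka:9092 \
      --describe --topic policy-pdp-pap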
kafka | [2024-01-23 12:00:14,663] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,665] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,666] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,666] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,666] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,666] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,666] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,667] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-23 12:00:14,668] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 24 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,668] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-01-23 12:00:14,668] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,668] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-23 12:00:14,747] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-05125a59-907a-47c3-93e1-a990571b604b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,751] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 7faaa365-1216-4c85-9c2d-e9bca189fc3d in Empty state. Created a new member id consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3-c639ff7a-2705-4b68-b804-62b68552537a and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,771] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-05125a59-907a-47c3-93e1-a990571b604b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:14,774] INFO [GroupCoordinator 1]: Preparing to rebalance group 7faaa365-1216-4c85-9c2d-e9bca189fc3d in state PreparingRebalance with old generation 0 (__consumer_offsets-46) (reason: Adding new member consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3-c639ff7a-2705-4b68-b804-62b68552537a with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:15,108] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 5e219e28-7118-417e-b91d-edf2321c7473 in Empty state. Created a new member id consumer-5e219e28-7118-417e-b91d-edf2321c7473-2-d01c9040-7f78-415c-8a67-4b73bfd12a93 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
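The "rebalance failed due to MemberIdRequiredException" reason above is the normal first-contact handshake for recent Kafka clients (KIP-394): the broker rejects the initial JoinGroup, hands back a generated member id, and the client immediately rejoins with it. Once a group stabilizes it can be inspected with the stock tooling; a sketch under the same container and listener assumptions as above:

    # List members and partition assignments for the PAP consumer group.
    docker exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 \
      --describe --group policy-pap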
kafka | [2024-01-23 12:00:15,113] INFO [GroupCoordinator 1]: Preparing to rebalance group 5e219e28-7118-417e-b91d-edf2321c7473 in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-5e219e28-7118-417e-b91d-edf2321c7473-2-d01c9040-7f78-415c-8a67-4b73bfd12a93 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:17,781] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:17,785] INFO [GroupCoordinator 1]: Stabilized group 7faaa365-1216-4c85-9c2d-e9bca189fc3d generation 1 (__consumer_offsets-46) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:17,815] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-05125a59-907a-47c3-93e1-a990571b604b for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:17,817] INFO [GroupCoordinator 1]: Assignment received from leader consumer-7faaa365-1216-4c85-9c2d-e9bca189fc3d-3-c639ff7a-2705-4b68-b804-62b68552537a for group 7faaa365-1216-4c85-9c2d-e9bca189fc3d for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:18,114] INFO [GroupCoordinator 1]: Stabilized group 5e219e28-7118-417e-b91d-edf2321c7473 generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-23 12:00:18,131] INFO [GroupCoordinator 1]: Assignment received from leader consumer-5e219e28-7118-417e-b91d-edf2321c7473-2-d01c9040-7f78-415c-8a67-4b73bfd12a93 for group 5e219e28-7118-417e-b91d-edf2321c7473 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping kafka ...
Stopping grafana ...
Stopping policy-api ...
Stopping prometheus ...
Stopping compose_zookeeper_1 ...
Stopping mariadb ...
Stopping simulator ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping policy-pap ... done
Stopping simulator ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping compose_zookeeper_1 ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing kafka ...
Removing grafana ...
Removing policy-api ...
Removing policy-db-migrator ...
Removing prometheus ...
Removing compose_zookeeper_1 ...
Removing mariadb ...
Removing simulator ...
Removing simulator ... done
Removing policy-apex-pdp ... done
Removing kafka ... done
Removing compose_zookeeper_1 ... done
Removing policy-pap ... done
Removing grafana ... done
Removing policy-db-migrator ... done
Removing policy-api ... done
Removing mariadb ... done
Removing prometheus ... done
Removing network compose_default
++ cd /w/workspace/policy-pap-master-project-csit-pap
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ [[ -n /tmp/tmp.T4ASB2z6Jw ]]
+ rsync -av /tmp/tmp.T4ASB2z6Jw/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 911,149 bytes  received 95 bytes  1,822,488.00 bytes/sec
total size is 910,607  speedup is 1.00
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+ exit 0
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2123 killed;
[ssh-agent] Stopped.
Robot results publisher started...
-Parsing output xml:
Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir:
Done!
-Assigning results to build:
Done!
-Checking thresholds:
Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3195400356898807782.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12437479620954204043.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
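The package-listing trace above reduces to a small snapshot-and-diff pattern. A condensed sketch, Debian-family branch only, with paths as in the trace and $WORKSPACE standing in for the Jenkins workspace:

    # Snapshot installed packages at job end and diff against the start list.
    dpkg -l | grep '^ii' > /tmp/packages_end.txt
    # diff exits non-zero when the lists differ, so guard it under 'set -e'.
    diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true
    mkdir -p "$WORKSPACE/archives/"
    cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt "$WORKSPACE/archives/"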
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9208720287268946047.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dyeB from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-dyeB/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17293255453978466800.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config17626998686283198592tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins497474894402880750.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3876169952220041563.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dyeB from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-dyeB/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13900155818246817967.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12452643335119819247.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dyeB from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-dyeB/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins6823146667807296703.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-dyeB from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
python-openstackclient 6.4.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-dyeB/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1547
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
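The two resolver errors above pull openstacksdk in opposite directions: lftools 0.37.8 pins it below 1.5.0 while python-openstackclient 6.4.0 wants 2.0.0 or newer, and reusing a single venv (/tmp/venv-dyeB) for both tools flips the installed version back and forth. One hedged fix is to stop sharing the environment (venv paths below are illustrative):

    # Give each tool its own venv so their openstacksdk constraints
    # never have to be satisfied simultaneously.
    python3 -m venv /tmp/venv-lftools
    /tmp/venv-lftools/bin/pip install 'lftools==0.37.8'
    python3 -m venv /tmp/venv-osc
    /tmp/venv-osc/bin/pip install python-openstackclient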
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-14552 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
---> lscpu:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 8
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC-Rome Processor
Stepping: 0
CPU MHz: 2800.000
BogoMIPS: 5600.00
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
---> nproc:
8
---> df -h:
Filesystem   Size  Used Avail Use% Mounted on
udev          16G     0   16G   0% /dev
tmpfs        3.2G  708K  3.2G   1% /run
/dev/vda1    155G   15G  141G  10% /
tmpfs         16G     0   16G   0% /dev/shm
tmpfs        5.0M     0  5.0M   0% /run/lock
tmpfs         16G     0   16G   0% /sys/fs/cgroup
/dev/vda15   105M  4.4M  100M   5% /boot/efi
tmpfs        3.2G     0  3.2G   0% /run/user/1001
---> free -m:
       total   used   free  shared  buff/cache  available
Mem:   32167    846  24634       0        6686      30865
Swap:   1023      0   1023
---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:05:bb:b6 brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.120/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85891sec preferred_lft 85891sec
    inet6 fe80::f816:3eff:fe05:bbb6/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:eb:4e:44:b0 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14552)  01/23/24  _x86_64_  (8 CPU)
11:55:14  LINUX RESTART  (8 CPU)
11:56:01  tps  rtps  wtps  bread/s  bwrtn/s
11:57:01  97.02  17.71  79.30  1021.43  25416.43
11:58:01  120.20  22.88  97.32  2757.14  29077.82
11:59:01  164.02  0.28  163.74  31.86  89267.92
12:00:01  395.80  11.65  384.16  776.37  98886.80
12:01:01  30.03  0.70  29.33  40.93  22584.49
12:02:01  15.98  0.00  15.98  0.00  19299.32
12:03:01  68.42  1.05  67.37  69.06  21836.74
Average:  127.36  7.75  119.61  670.97  43768.39
11:56:01  kbmemfree  kbavail  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
11:57:01  30075356  31684164  2863864  8.69  69240  1849444  1446032  4.25  884876  1684892  173068
11:58:01  29507212  31691856  3432008  10.42  89680  2383952  1547808  4.55  963692  2131520  352692
11:59:01  26726292  31668408  6212928  18.86  133720  4974528  1409496  4.15  1012044  4711732  865044
12:00:01  23787924  30330944  9151296  27.78  157552  6478748  7775868  22.88  2488744  6046624  356
12:01:01  23083132  29632912  9856088  29.92  158888  6481664  8763248  25.78  3235272  5997640  284
12:02:01  23064804  29615224  9874416  29.98  159056  6481980  8763364  25.78  3252476  5997292  360
12:03:01  25297312  31668812  7641908  23.20  161484  6319272  1489416  4.38  1225072  5855864  44596
Average:  25934576  30898903  7004644  21.27  132803  4995655  4456462  13.11  1866025  4632223  205200
11:56:01  IFACE  rxpck/s  txpck/s  rxkB/s  txkB/s  rxcmp/s  txcmp/s  rxmcst/s  %ifutil
11:57:01  lo  1.33  1.33  0.14  0.14  0.00  0.00  0.00  0.00
11:57:01  ens3  73.97  48.54  1005.62  9.04  0.00  0.00  0.00  0.00
11:57:01  docker0  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
11:58:01  lo  5.20  5.20  0.49  0.49  0.00  0.00  0.00  0.00
11:58:01  ens3  112.38  82.79  2420.56  10.85  0.00  0.00  0.00  0.00
11:58:01  br-0f0b718c2412  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
11:58:01  docker0  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
11:59:01  lo  6.73  6.73  0.68  0.68  0.00  0.00  0.00  0.00
11:59:01  ens3  813.88  459.37  19255.92  34.09  0.00  0.00  0.00  0.00
11:59:01  br-0f0b718c2412  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
11:59:01  docker0  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
12:00:01  vethcf60905  0.10  0.43  0.01  0.02  0.00  0.00  0.00  0.00
12:00:01  lo  2.67  2.67  0.23  0.23  0.00  0.00  0.00  0.00
12:00:01  vethb35b943  54.67  64.76  19.21  16.03  0.00  0.00  0.00  0.00
12:00:01  veth6d7739d  1.83  1.90  0.18  0.19  0.00  0.00  0.00  0.00
12:01:01  vethcf60905  0.48  0.48  0.05  1.37  0.00  0.00  0.00  0.00
12:01:01  lo  5.17  5.17  3.51  3.51  0.00  0.00  0.00  0.00
12:01:01  vethb35b943  51.82  62.66  57.90  15.34  0.00  0.00  0.00  0.00
12:01:01  veth6d7739d  18.20  15.03  2.16  2.24  0.00  0.00  0.00  0.00
12:02:01  vethcf60905  0.58  0.60  0.05  1.52  0.00  0.00  0.00  0.00
12:02:01  lo  5.18  5.18  0.38  0.38  0.00  0.00  0.00  0.00
12:02:01  vethb35b943  1.53  1.72  0.54  0.39  0.00  0.00  0.00  0.00
12:02:01  veth6d7739d  13.93  9.38  1.06  1.34  0.00  0.00  0.00  0.00
12:03:01  lo  4.87  4.87  0.45  0.45  0.00  0.00  0.00  0.00
12:03:01  ens3  1868.07  1096.07  37243.74  158.15  0.00  0.00  0.00  0.00
12:03:01  docker0  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
Average:  lo  4.45  4.45  0.84  0.84  0.00  0.00  0.00  0.00
Average:  ens3  214.06  124.89  5209.94  14.53  0.00  0.00  0.00  0.00
Average:  docker0  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14552)  01/23/24  _x86_64_  (8 CPU)
11:55:14  LINUX RESTART  (8 CPU)
11:56:01  CPU  %user  %nice  %system  %iowait  %steal  %idle
11:57:01  all  8.42  0.00  0.61  3.06  0.11  87.80
11:57:01  0  3.54  0.00  0.30  0.08  0.03  96.04
11:57:01  1  4.20  0.00  0.37  0.10  0.02  95.31
11:57:01  2  0.70  0.00  0.12  13.95  0.02  85.22
11:57:01  3  13.76  0.00  0.91  1.51  0.68  83.14
11:57:01  4  14.24  0.00  0.88  0.40  0.03  84.44
11:57:01  5  6.18  0.00  0.50  0.65  0.03  92.63
11:57:01  6  9.46  0.00  0.62  0.85  0.05  89.02
11:57:01  7  15.21  0.00  1.15  6.98  0.05  76.61
11:58:01  all  9.28  0.00  1.03  4.32  0.04  85.32
11:58:01  0  6.88  0.00  1.12  0.08  0.03  91.88
11:58:01  1  2.40  0.00  0.45  0.00  0.05  97.10
11:58:01  2  1.42  0.00  0.43  15.33  0.02  82.80
11:58:01  3  11.01  0.00  1.13  0.90  0.05  86.91
11:58:01  4  8.84  0.00  1.35  2.47  0.03  87.30
11:58:01  5  13.83  0.00  1.07  0.55  0.03  84.52
11:58:01  6  11.61  0.00  1.18  2.24  0.03  84.94
11:58:01  7  18.29  0.00  1.50  12.96  0.05  67.19
11:59:01  all  9.71  0.00  4.11  9.18  0.07  76.93
11:59:01  0  10.39  0.00  5.16  25.65  0.07  58.73
11:59:01  1  9.60  0.00  4.24  9.96  0.07  76.12
11:59:01  2  10.96  0.00  5.06  18.64  0.07  65.28
11:59:01  3  10.42  0.00  3.72  0.00  0.07  85.79
11:59:01  4  8.53  0.00  4.34  0.24  0.07  86.82
11:59:01  5  9.45  0.00  4.16  2.37  0.05  83.98
11:59:01  6  10.70  0.00  3.37  4.82  0.07  81.04
11:59:01  7  7.64  0.00  2.86  11.83  0.05  77.63
12:00:01  all  20.10  0.00  4.41  7.92  0.08  67.49
12:00:01  0  14.40  0.00  3.92  1.03  0.07  80.58
12:00:01  1  25.62  0.00  5.72  34.49  0.10  34.07
12:00:01  2  21.13  0.00  4.16  1.35  0.07  73.30
12:00:01  3  18.59  0.00  3.77  1.93  0.07  75.64
12:00:01  4  23.87  0.00  5.67  2.34  0.08  68.04
12:00:01  5  21.31  0.00  4.34  3.10  0.08  71.16
12:00:01  6  16.52  0.00  3.84  1.43  0.08  78.13
12:00:01  7  19.43  0.00  3.88  17.81  0.08  58.79
12:01:01  all  16.95  0.00  1.55  1.01  0.06  80.44
12:01:01  0  18.77  0.00  1.99  0.02  0.07  79.15
12:01:01  1  17.85  0.00  1.64  3.61  0.05  76.85
12:01:01  2  20.77  0.00  1.61  0.07  0.05  77.50
12:01:01  3  17.93  0.00  1.47  3.28  0.07  77.25
12:01:01  4  16.73  0.00  1.48  0.03  0.07  81.69
12:01:01  5  13.38  0.00  1.20  0.07  0.07  85.28
12:01:01  6  16.01  0.00  1.42  0.36  0.07  82.13
12:01:01  7  14.12  0.00  1.54  0.69  0.05  83.61
12:02:01  all  1.16  0.00  0.15  0.99  0.04  97.66
12:02:01  0  0.72  0.00  0.20  0.00  0.05  99.03
12:02:01  1  0.55  0.00  0.20  4.77  0.03  94.44
12:02:01  2  1.27  0.00  0.07  0.05  0.03  98.58
12:02:01  3  1.05  0.00  0.10  0.53  0.05  98.26
12:02:01  4  1.09  0.00  0.13  2.50  0.03  96.24
12:02:01  5  1.52  0.00  0.18  0.00  0.07  98.23
12:02:01  6  2.05  0.00  0.20  0.00  0.03  97.72
12:02:01  7  1.08  0.00  0.15  0.02  0.03  98.72
12:03:01  all  4.05  0.00  0.70  1.79  0.04  93.43
12:03:01  0  1.67  0.00  0.69  0.42  0.03  97.19
12:03:01  1  1.59  0.00  0.60  0.68  0.03  97.09
12:03:01  2  1.82  0.00  0.40  0.13  0.03  97.61
12:03:01  3  2.13  0.00  0.47  0.64  0.05  96.72
12:03:01  4  0.85  0.00  0.75  11.07  0.03  87.30
12:03:01  5  2.52  0.00  0.70  0.02  0.03  96.73
12:03:01  6  19.69  0.00  1.24  0.89  0.07  78.12
12:03:01  7  2.10  0.00  0.70  0.50  0.03  96.66
Average:  all  9.94  0.00  1.79  4.03  0.06  84.18
Average:  0  8.04  0.00  1.90  3.86  0.05  86.15
Average:  1  8.80  0.00  1.88  7.61  0.05  81.66
Average:  2  8.27  0.00  1.68  7.06  0.04  82.94
Average:  3  10.71  0.00  1.65  1.26  0.15  86.23
Average:  4  10.57  0.00  2.08  2.73  0.05  84.57
Average:  5  9.73  0.00  1.73  0.96  0.05  87.53
Average:  6  12.29  0.00  1.69  1.51  0.06  84.46
Average:  7  11.12  0.00  1.68  7.23  0.05  79.93
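The sysstat reports above sample once per minute across the roughly seven-minute job window. They can be reproduced live on any host with the sysstat package installed; a minimal sketch of the equivalent invocations:

    # I/O, memory and per-interface network, 7 one-minute samples.
    sar -b -r -n DEV 60 7
    # Per-CPU utilization over the same window.
    sar -P ALL 60 7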