Started by upstream project "policy-docker-master-merge-java" build number 346
originally caused by:
 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137652
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-21829 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-JsAABIdVHeJf/agent.2079
SSH_AGENT_PID=2081
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9891480896115039705.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9891480896115039705.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision c5936fb131831992ac8da40fb56599dfb0ae1b5e (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c5936fb131831992ac8da40fb56599dfb0ae1b5e # timeout=30
Commit message: "Disable drools pdp test in CSIT until drools is fixed"
 > git rev-list --no-walk cebb4172163dc04b43be7e34d9a4b374370492f8 # timeout=10
provisioning config files...
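Note: the checkout above can be reproduced outside Jenkins. A minimal sketch using only the repository URL and revision recorded in this log (everything else is standard git):

    #!/usr/bin/env bash
    # Recreate the Jenkins checkout: init an empty repo, fetch all branches
    # from the ONAP mirror, then check out the exact revision under test.
    set -euo pipefail
    workspace=/w/workspace/policy-pap-master-project-csit-pap
    rev=c5936fb131831992ac8da40fb56599dfb0ae1b5e
    git init "$workspace"
    cd "$workspace"
    git fetch --tags git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
    git checkout -f "$rev"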
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8462579616346211994.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-3z0W
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-3z0W/bin to PATH
Generating Requirements File
Python 3.10.6
pip 24.0 from /tmp/venv-3z0W/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.2.3
aspy.yaml==1.3.0
attrs==23.2.0
autopage==0.5.2
beautifulsoup4==4.12.3
boto3==1.34.80
botocore==1.34.80
bs4==0.0.2
cachetools==5.3.3
certifi==2024.2.2
cffi==1.16.0
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
cliff==4.6.0
cmd2==2.4.3
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
distlib==0.3.8
dnspython==2.6.1
docker==4.2.2
dogpile.cache==1.3.2
email_validator==2.1.1
filelock==3.13.3
future==1.0.0
gitdb==4.0.11
GitPython==3.1.43
google-auth==2.29.0
httplib2==0.22.0
identify==2.5.35
idna==3.6
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.3
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
keystoneauth1==5.6.0
kubernetes==29.0.0
lftools==0.37.10
lxml==5.2.1
MarkupSafe==2.1.5
msgpack==1.0.8
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.2.1
netifaces==0.11.0
niet==1.4.2
nodeenv==1.8.0
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==3.0.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==3.0.1
oslo.config==9.4.0
oslo.context==5.5.0
oslo.i18n==6.3.0
oslo.log==5.5.1
oslo.serialization==5.4.0
oslo.utils==7.1.0
packaging==24.0
pbr==6.0.0
platformdirs==4.2.0
prettytable==3.10.0
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycparser==2.22
pygerrit2==2.0.15
PyGithub==2.3.0
pyinotify==0.9.6
PyJWT==2.8.0
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.8.2
pyrsistent==0.20.0
python-cinderclient==9.5.0
python-dateutil==2.9.0.post0
python-heatclient==3.5.0
python-jenkins==1.8.2
python-keystoneclient==5.4.0
python-magnumclient==4.4.0
python-novaclient==18.6.0
python-openstackclient==6.6.0
python-swiftclient==4.5.0
PyYAML==6.0.1
referencing==0.34.0
requests==2.31.0
requests-oauthlib==2.0.0
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.18.0
rsa==4.9
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
s3transfer==0.10.1
simplejson==3.19.2
six==1.16.0
smmap==5.0.1
soupsieve==2.5
stevedore==5.2.0
tabulate==0.9.0
toml==0.10.2
tomlkit==0.12.4
tqdm==4.66.2
typing_extensions==4.11.0
tzdata==2024.1
urllib3==1.26.18
virtualenv==20.25.1
wcwidth==0.2.13
websocket-client==1.7.0
wrapt==1.16.0
xdg==6.0.0
xmltodict==0.13.0
yq==3.2.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
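Note: lf-activate-venv() is an LF releng helper whose internals are not shown here; the steps it logs above amount to roughly the following sketch (the venv path suffix -3z0W was generated, so a mktemp placeholder is used):

    # Create a disposable python3 venv, install lftools into it,
    # prepend its bin dir to PATH, then snapshot the installed packages.
    venv=$(mktemp -d /tmp/venv-XXXX)
    python3 -m venv "$venv"
    "$venv/bin/python" -m pip install --quiet --upgrade lftools
    export PATH="$venv/bin:$PATH"
    python3 --version && pip --version
    pip freeze > requirements.txt    # "Generating Requirements File"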
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins7988746917834826800.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins12946586964289018512.sh
+ /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
+ set +u
+ save_set
+ RUN_CSIT_SAVE_SET=ehxB
+ RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
+ '[' 1 -eq 0 ']'
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ export ROBOT_VARIABLES=
+ ROBOT_VARIABLES=
+ export PROJECT=pap
+ PROJECT=pap
+ cd /w/workspace/policy-pap-master-project-csit-pap
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ mktemp -d
++ ROBOT_VENV=/tmp/tmp.f2xXiAH3bV
++ echo ROBOT_VENV=/tmp/tmp.f2xXiAH3bV
+++ python3 --version
++ echo 'Python version is: Python 3.6.9'
Python version is: Python 3.6.9
++ python3 -m venv --clear /tmp/tmp.f2xXiAH3bV
++ source /tmp/tmp.f2xXiAH3bV/bin/activate
+++ deactivate nondestructive
+++ '[' -n '' ']'
+++ '[' -n '' ']'
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
+++ '[' -n '' ']'
+++ unset VIRTUAL_ENV
+++ '[' '!' nondestructive = nondestructive ']'
+++ VIRTUAL_ENV=/tmp/tmp.f2xXiAH3bV
+++ export VIRTUAL_ENV
+++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ PATH=/tmp/tmp.f2xXiAH3bV/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ export PATH
+++ '[' -n '' ']'
+++ '[' -z '' ']'
+++ _OLD_VIRTUAL_PS1=
+++ '[' 'x(tmp.f2xXiAH3bV) ' '!=' x ']'
+++ PS1='(tmp.f2xXiAH3bV) '
+++ export PS1
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
++ set -exu
++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
++ echo 'Installing Python Requirements'
Installing Python Requirements
++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
++ python3 -m pip -qq freeze
bcrypt==4.0.1
beautifulsoup4==4.12.3
bitarray==2.9.2
certifi==2024.2.2
cffi==1.15.1
charset-normalizer==2.0.12
cryptography==40.0.2
decorator==5.1.1
elasticsearch==7.17.9
elasticsearch-dsl==7.4.1
enum34==1.1.10
idna==3.6
importlib-resources==5.4.0
ipaddr==2.2.0
isodate==0.6.1
jmespath==0.10.0
jsonpatch==1.32
jsonpath-rw==1.4.0
jsonpointer==2.3
lxml==5.2.1
netaddr==0.8.0
netifaces==0.11.0
odltools==0.1.28
paramiko==3.4.0
pkg_resources==0.0.0
ply==3.11
pyang==2.6.0
pyangbind==0.8.1
pycparser==2.21
pyhocon==0.3.60
PyNaCl==1.5.0
pyparsing==3.1.2
python-dateutil==2.9.0.post0
regex==2023.8.8
requests==2.27.1
robotframework==6.1.1
robotframework-httplibrary==0.4.2
robotframework-pythonlibcore==3.0.0
robotframework-requests==0.9.4
robotframework-selenium2library==3.0.0
robotframework-seleniumlibrary==5.1.3
robotframework-sshlibrary==3.8.0
scapy==2.5.0
scp==0.14.5
selenium==3.141.0
six==1.16.0
soupsieve==2.3.2.post1
urllib3==1.26.18
waitress==2.0.0
WebOb==1.8.7
WebTest==3.0.0
zipp==3.6.0
++ mkdir -p /tmp/tmp.f2xXiAH3bV/src/onap
++ rm -rf /tmp/tmp.f2xXiAH3bV/src/onap/testsuite
++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
++ echo 'Installing python confluent-kafka library'
Installing python confluent-kafka library
++ python3 -m pip install -qq confluent-kafka
++ echo 'Uninstall docker-py and reinstall docker.'
Uninstall docker-py and reinstall docker.
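Note: save_set, relax_set and source_safely are helpers defined in run-project-csit.sh; their bodies are not printed in this log, but the xtrace output above is consistent with a pattern like the following (a reconstruction under that assumption, not the verbatim script):

    # Remember the current shell options, relax error handling while a
    # script is sourced, and let the caller restore the saved options later.
    save_set() {
        RUN_CSIT_SAVE_SET="$-"
        RUN_CSIT_SHELLOPTS="$SHELLOPTS"
    }
    relax_set() {
        set +e
        set +o pipefail
    }
    source_safely() {
        [ -z "$1" ] && exit 1
        relax_set
        . "$1"
        load_set    # re-applies the options captured by save_set
    }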
++ python3 -m pip uninstall -y -qq docker
++ python3 -m pip install -U -qq docker
++ python3 -m pip -qq freeze
bcrypt==4.0.1
beautifulsoup4==4.12.3
bitarray==2.9.2
certifi==2024.2.2
cffi==1.15.1
charset-normalizer==2.0.12
confluent-kafka==2.3.0
cryptography==40.0.2
decorator==5.1.1
deepdiff==5.7.0
dnspython==2.2.1
docker==5.0.3
elasticsearch==7.17.9
elasticsearch-dsl==7.4.1
enum34==1.1.10
future==1.0.0
idna==3.6
importlib-resources==5.4.0
ipaddr==2.2.0
isodate==0.6.1
Jinja2==3.0.3
jmespath==0.10.0
jsonpatch==1.32
jsonpath-rw==1.4.0
jsonpointer==2.3
kafka-python==2.0.2
lxml==5.2.1
MarkupSafe==2.0.1
more-itertools==5.0.0
netaddr==0.8.0
netifaces==0.11.0
odltools==0.1.28
ordered-set==4.0.2
paramiko==3.4.0
pbr==6.0.0
pkg_resources==0.0.0
ply==3.11
protobuf==3.19.6
pyang==2.6.0
pyangbind==0.8.1
pycparser==2.21
pyhocon==0.3.60
PyNaCl==1.5.0
pyparsing==3.1.2
python-dateutil==2.9.0.post0
PyYAML==6.0.1
regex==2023.8.8
requests==2.27.1
robotframework==6.1.1
robotframework-httplibrary==0.4.2
robotframework-onap==0.6.0.dev105
robotframework-pythonlibcore==3.0.0
robotframework-requests==0.9.4
robotframework-selenium2library==3.0.0
robotframework-seleniumlibrary==5.1.3
robotframework-sshlibrary==3.8.0
robotlibcore-temp==1.0.2
scapy==2.5.0
scp==0.14.5
selenium==3.141.0
six==1.16.0
soupsieve==2.3.2.post1
urllib3==1.26.18
waitress==2.0.0
WebOb==1.8.7
websocket-client==1.3.1
WebTest==3.0.0
zipp==3.6.0
++ uname
++ grep -q Linux
++ sudo apt-get -y -qq install libxml2-utils
+ load_set
+ _setopts=ehuxB
++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o nounset
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo ehuxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +e
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +u
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ source_safely /tmp/tmp.f2xXiAH3bV/bin/activate
+ '[' -z /tmp/tmp.f2xXiAH3bV/bin/activate ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /tmp/tmp.f2xXiAH3bV/bin/activate
++ deactivate nondestructive
++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ unset _OLD_VIRTUAL_PATH
++ '[' -n '' ']'
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
++ '[' -n '' ']'
++ unset VIRTUAL_ENV
++ '[' '!' nondestructive = nondestructive ']'
++ VIRTUAL_ENV=/tmp/tmp.f2xXiAH3bV
++ export VIRTUAL_ENV
++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ PATH=/tmp/tmp.f2xXiAH3bV/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ '[' -n '' ']'
++ '[' -z '' ']'
++ _OLD_VIRTUAL_PS1='(tmp.f2xXiAH3bV) '
++ '[' 'x(tmp.f2xXiAH3bV) ' '!=' x ']'
++ PS1='(tmp.f2xXiAH3bV) (tmp.f2xXiAH3bV) '
++ export PS1
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ export TEST_OPTIONS=
+ TEST_OPTIONS=
++ mktemp -d
+ WORKDIR=/tmp/tmp.6QreRUgV9i
+ cd /tmp/tmp.6QreRUgV9i
+ docker login -u docker -p docker nexus3.onap.org:10001
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
+ SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
+++ GERRIT_BRANCH=master
+++ echo GERRIT_BRANCH=master
GERRIT_BRANCH=master
+++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
+++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
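Note: the docker login warning above is expected when -p is passed on the command line; the form Docker itself recommends reads the password from stdin instead. A sketch with the registry and account from this job and a placeholder secret:

    # Avoids the "--password via the CLI is insecure" warning by keeping
    # the secret out of argv and shell history ($NEXUS_PASSWORD is a placeholder).
    echo "$NEXUS_PASSWORD" | docker login -u docker --password-stdin nexus3.onap.org:10001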
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
+++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
+++ grafana=false
+++ gui=false
+++ [[ 2 -gt 0 ]]
+++ key=apex-pdp
+++ case $key in
+++ echo apex-pdp
apex-pdp
+++ component=apex-pdp
+++ shift
+++ [[ 1 -gt 0 ]]
+++ key=--grafana
+++ case $key in
+++ grafana=true
+++ shift
+++ [[ 0 -gt 0 ]]
+++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
+++ echo 'Configuring docker compose...'
Configuring docker compose...
+++ source export-ports.sh
+++ source get-versions.sh
+++ '[' -z pap ']'
+++ '[' -n apex-pdp ']'
+++ '[' apex-pdp == logs ']'
+++ '[' true = true ']'
+++ echo 'Starting apex-pdp application with Grafana'
Starting apex-pdp application with Grafana
+++ docker-compose up -d apex-pdp grafana
Creating network "compose_default" with the default driver
Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
latest: Pulling from prom/prometheus
Digest: sha256:dec2018ae55885fed717f25c289b8c9cff0bf5fbb9e619fb49b6161ac493c016
Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
latest: Pulling from grafana/grafana
Digest: sha256:753bbb971071480d6630d3aa0d55345188c02f39456664f67c1ea443593638d0
Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
10.10.2: Pulling from mariadb
Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1)...
3.1.1: Pulling from onap/policy-models-simulator
Digest: sha256:a22fada6cc93fcd88ed863d58b0b428eaaf13d3b02579e649141f6bdb5fac181
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1
Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
latest: Pulling from confluentinc/cp-zookeeper
Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866
Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
Pulling kafka (confluentinc/cp-kafka:latest)...
latest: Pulling from confluentinc/cp-kafka
Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047
Status: Downloaded newer image for confluentinc/cp-kafka:latest
Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
Digest: sha256:60a680475999b7df727a4e4ae6dd0391d3a6f4fffbde0f8c3faea985c8443c48
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1)...
3.1.1: Pulling from onap/policy-api
Digest: sha256:73823c235d74d2500efd44b527f0e010b15469552561a2052fab717e6646a352
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1
Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.1)...
3.1.1: Pulling from onap/policy-pap
Digest: sha256:2271905a2e80443fc6baa2f2141445192fe325d5c557920b1f4880541288e18d
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.1
Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp
Digest: sha256:3f9880e060c3465862043c69561fa1d43ab448175d1adf3efd53d751d3b9947d
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT
Creating prometheus ...
Creating compose_zookeeper_1 ...
Creating simulator ...
Creating mariadb ...
Creating simulator ... done
Creating mariadb ... done
Creating policy-db-migrator ...
Creating policy-db-migrator ... done
Creating policy-api ...
Creating policy-api ... done
Creating prometheus ... done
Creating grafana ...
Creating grafana ... done
Creating compose_zookeeper_1 ... done
Creating kafka ...
Creating kafka ... done
Creating policy-pap ...
Creating policy-pap ... done
Creating policy-apex-pdp ...
Creating policy-apex-pdp ... done
+++ echo 'Prometheus server: http://localhost:30259'
Prometheus server: http://localhost:30259
+++ echo 'Grafana server: http://localhost:30269'
Grafana server: http://localhost:30269
+++ cd /w/workspace/policy-pap-master-project-csit-pap
++ sleep 10
++ unset http_proxy https_proxy
++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
Waiting for REST to come up on localhost port 30003...
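Note: wait_for_rest.sh itself is not echoed into this log; judging from the banner above and the repeated container-status tables that follow, a minimal stand-in with similar behavior could be (a hypothetical reconstruction, not the project's script):

    #!/usr/bin/env bash
    # Usage: wait_for_rest.sh <host> <port>
    # Poll until something answers on host:port, printing container status between tries.
    host=${1:-localhost}
    port=${2:-30003}
    echo "Waiting for REST to come up on $host port $port..."
    until curl -sf -o /dev/null "http://$host:$port"; do
        docker ps --format 'table {{ .Names }}\t{{ .Status }}'
        sleep 5
    done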
NAMES                 STATUS
policy-apex-pdp       Up 10 seconds
policy-pap            Up 11 seconds
kafka                 Up 12 seconds
grafana               Up 14 seconds
policy-api            Up 16 seconds
compose_zookeeper_1   Up 13 seconds
mariadb               Up 18 seconds
simulator             Up 19 seconds
prometheus            Up 15 seconds
NAMES                 STATUS
policy-apex-pdp       Up 15 seconds
policy-pap            Up 16 seconds
kafka                 Up 17 seconds
grafana               Up 19 seconds
policy-api            Up 21 seconds
compose_zookeeper_1   Up 18 seconds
mariadb               Up 23 seconds
simulator             Up 24 seconds
prometheus            Up 20 seconds
NAMES                 STATUS
policy-apex-pdp       Up 20 seconds
policy-pap            Up 21 seconds
kafka                 Up 22 seconds
grafana               Up 24 seconds
policy-api            Up 26 seconds
compose_zookeeper_1   Up 23 seconds
mariadb               Up 28 seconds
simulator             Up 29 seconds
prometheus            Up 25 seconds
NAMES                 STATUS
policy-apex-pdp       Up 25 seconds
policy-pap            Up 26 seconds
kafka                 Up 27 seconds
grafana               Up 29 seconds
policy-api            Up 31 seconds
compose_zookeeper_1   Up 28 seconds
mariadb               Up 33 seconds
simulator             Up 34 seconds
prometheus            Up 30 seconds
NAMES                 STATUS
policy-apex-pdp       Up 30 seconds
policy-pap            Up 31 seconds
kafka                 Up 32 seconds
grafana               Up 34 seconds
policy-api            Up 36 seconds
compose_zookeeper_1   Up 33 seconds
mariadb               Up 38 seconds
simulator             Up 39 seconds
prometheus            Up 35 seconds
++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ docker_stats
+ tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 14:12:25 up 5 min, 0 users, load average: 3.27, 1.57, 0.65
Tasks: 210 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 11.6 us, 2.4 sy, 0.0 ni, 80.5 id, 5.4 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
        total   used   free   shared   buff/cache   available
Mem:    31G     2.5G   22G    1.3M     6.2G         28G
Swap:   1.0G    0B     1.0G
NAMES                 STATUS
policy-apex-pdp       Up 30 seconds
policy-pap            Up 31 seconds
kafka                 Up 32 seconds
grafana               Up 34 seconds
policy-api            Up 36 seconds
compose_zookeeper_1   Up 33 seconds
mariadb               Up 38 seconds
simulator             Up 39 seconds
prometheus            Up 35 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %     MEM USAGE / LIMIT      MEM %   NET I/O           BLOCK I/O       PIDS
d2c3e0b17a61   policy-apex-pdp       193.67%   193.9MiB / 31.41GiB    0.60%   7.07kB / 6.85kB   0B / 0B         48
85e3881d71a2   policy-pap            9.57%     518.7MiB / 31.41GiB    1.61%   28kB / 29.8kB     0B / 153MB      61
ef04caad11d3   kafka                 55.49%    390.8MiB / 31.41GiB    1.21%   69.7kB / 73.2kB   0B / 500kB      83
a62c613b33fc   grafana               0.02%     53.69MiB / 31.41GiB    0.17%   18.4kB / 3.18kB   0B / 24.9MB     18
acf71fa6ff00   policy-api            0.09%     496.5MiB / 31.41GiB    1.54%   1e+03kB / 710kB   0B / 0B         55
e6905731a0ff   compose_zookeeper_1   0.09%     99.88MiB / 31.41GiB    0.31%   56.3kB / 49.5kB   0B / 385kB      60
5e76f0512c7d   mariadb               0.01%     102.1MiB / 31.41GiB    0.32%   995kB / 1.19MB    11MB / 63.7MB   41
6d002923de7f   simulator             0.08%     122.1MiB / 31.41GiB    0.38%   1.81kB / 0B       168kB / 0B      76
0e5aa80e1ba3   prometheus            0.00%     18.21MiB / 31.41GiB    0.06%   1.28kB / 158B     0B / 0B         13
+ echo
+ cd /tmp/tmp.6QreRUgV9i
+ echo 'Reading the testplan:'
Reading the testplan:
+ echo 'pap-test.robot pap-slas.robot'
+ egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
+ sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
+ cat testplan.txt
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
++ xargs
+ SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
+ echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+ echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
==============================================================================
pap
==============================================================================
pap.Pap-Test
==============================================================================
LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
------------------------------------------------------------------------------
LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
------------------------------------------------------------------------------
LoadNodeTemplates :: Create node templates in database using speci... | PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
------------------------------------------------------------------------------
pap.Pap-Test | PASS |
22 tests, 22 passed, 0 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
------------------------------------------------------------------------------
pap.Pap-Slas | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap | PASS |
30 tests, 30 passed, 0 failed
==============================================================================
Output: /tmp/tmp.6QreRUgV9i/output.xml
Log: /tmp/tmp.6QreRUgV9i/log.html
Report: /tmp/tmp.6QreRUgV9i/report.html
+ RESULT=0
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 0'
RESULT: 0
+ exit 0
+ on_exit
+ rc=0
+ [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
kafka                 Up 2 minutes
grafana               Up 2 minutes
policy-api            Up 2 minutes
compose_zookeeper_1   Up 2 minutes
mariadb               Up 2 minutes
simulator             Up 2 minutes
prometheus            Up 2 minutes
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 14:14:15 up 6 min, 0 users, load average: 0.73, 1.22, 0.62
Tasks: 200 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
%Cpu(s): 9.8 us, 1.9 sy, 0.0 ni, 83.9 id, 4.3 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
        total   used   free   shared   buff/cache   available
Mem:    31G     2.8G   22G    1.3M     6.2G         28G
Swap:   1.0G    0B     1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
kafka                 Up 2 minutes
grafana               Up 2 minutes
policy-api            Up 2 minutes
compose_zookeeper_1   Up 2 minutes
mariadb               Up 2 minutes
simulator             Up 2 minutes
prometheus            Up 2 minutes
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %   MEM USAGE / LIMIT      MEM %   NET I/O           BLOCK I/O       PIDS
d2c3e0b17a61   policy-apex-pdp       0.48%   188.9MiB / 31.41GiB    0.59%   55.6kB / 89.6kB   0B / 0B         52
85e3881d71a2   policy-pap            1.03%   536.8MiB / 31.41GiB    1.67%   2.33MB / 806kB    0B / 153MB      65
ef04caad11d3   kafka                 1.06%   382.4MiB / 31.41GiB    1.19%   237kB / 214kB     0B / 606kB      85
a62c613b33fc   grafana               0.06%   61MiB / 31.41GiB       0.19%   19.5kB / 4.45kB   0B / 24.9MB     18
acf71fa6ff00   policy-api            0.10%   563.8MiB / 31.41GiB    1.75%   2.49MB / 1.26MB   0B / 0B         58
e6905731a0ff   compose_zookeeper_1   0.09%   100.2MiB / 31.41GiB    0.31%   59.2kB / 51.1kB   0B / 385kB      60
5e76f0512c7d   mariadb               0.02%   103.4MiB / 31.41GiB    0.32%   1.95MB / 4.77MB   11MB / 64.1MB   28
6d002923de7f   simulator             0.08%   122.2MiB / 31.41GiB    0.38%   2.12kB / 0B       168kB / 0B      78
0e5aa80e1ba3   prometheus            0.00%   25.39MiB / 31.41GiB    0.08%   181kB / 10.9kB    0B / 0B         13
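Note: Robot wrote output.xml, log.html and report.html into the scratch directory above; with stock Robot Framework those artifacts can be regenerated (or several runs merged) later via rebot, e.g.:

    # Rebuild log/report from the archived output.xml (path from this run).
    python3 -m robot.rebot --name pap --log log.html --report report.html /tmp/tmp.6QreRUgV9i/output.xml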
+ echo
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
++ echo 'Shut down started!'
Shut down started!
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
++ source export-ports.sh
++ source get-versions.sh
++ echo 'Collecting logs from docker compose containers...'
Collecting logs from docker compose containers...
++ docker-compose logs
++ cat docker_compose.log
Attaching to policy-apex-pdp, policy-pap, kafka, grafana, policy-api, policy-db-migrator, compose_zookeeper_1, mariadb, simulator, prometheus
grafana | logger=settings t=2024-04-09T14:11:50.841763984Z level=info msg="Starting Grafana" version=10.4.1 commit=d94d597d847c05085542c29dfad6b3f469cc77e1 branch=v10.4.x compiled=2024-04-09T14:11:50Z
grafana | logger=settings t=2024-04-09T14:11:50.842967186Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2024-04-09T14:11:50.843132929Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2024-04-09T14:11:50.84319117Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2024-04-09T14:11:50.843253311Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2024-04-09T14:11:50.843300622Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-04-09T14:11:50.843365933Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2024-04-09T14:11:50.843448295Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2024-04-09T14:11:50.843506056Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2024-04-09T14:11:50.843619668Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2024-04-09T14:11:50.843694779Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-04-09T14:11:50.843775381Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2024-04-09T14:11:50.843872793Z level=info msg=Target target=[all]
grafana | logger=settings t=2024-04-09T14:11:50.843931884Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2024-04-09T14:11:50.844016455Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2024-04-09T14:11:50.844062226Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2024-04-09T14:11:50.844146588Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2024-04-09T14:11:50.844178148Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2024-04-09T14:11:50.84428847Z level=info msg="App mode production"
grafana | logger=sqlstore t=2024-04-09T14:11:50.844781909Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2024-04-09T14:11:50.844875971Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2024-04-09T14:11:50.845683716Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2024-04-09T14:11:50.846806516Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2024-04-09T14:11:50.847742014Z level=info msg="Migration successfully executed" id="create migration_log table" duration=934.568µs
grafana | logger=migrator t=2024-04-09T14:11:50.889773833Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2024-04-09T14:11:50.891019576Z level=info msg="Migration successfully executed" id="create user table" duration=1.251573ms
grafana | logger=migrator t=2024-04-09T14:11:50.894056932Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2024-04-09T14:11:50.89504398Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=986.988µs
grafana | logger=migrator t=2024-04-09T14:11:50.909824141Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2024-04-09T14:11:50.910796318Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=976.378µs
grafana | logger=migrator t=2024-04-09T14:11:50.913118141Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2024-04-09T14:11:50.913727322Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=609.421µs
grafana | logger=migrator t=2024-04-09T14:11:50.916126846Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2024-04-09T14:11:50.916760648Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=634.492µs
grafana | logger=migrator t=2024-04-09T14:11:50.921473424Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2024-04-09T14:11:50.925808823Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.335459ms
grafana | logger=migrator t=2024-04-09T14:11:50.929106464Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2024-04-09T14:11:50.930063842Z level=info msg="Migration successfully executed" id="create user table v2" duration=956.938µs
grafana | logger=migrator t=2024-04-09T14:11:50.932837092Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2024-04-09T14:11:50.933682598Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=845.506µs
grafana | logger=migrator t=2024-04-09T14:11:50.938432015Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2024-04-09T14:11:50.9392856Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=853.385µs
UQE_user_email - v2" duration=853.385µs grafana | logger=migrator t=2024-04-09T14:11:50.942090222Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-04-09T14:11:50.942652822Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=559.43µs grafana | logger=migrator t=2024-04-09T14:11:50.945570865Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-04-09T14:11:50.946260448Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=688.523µs grafana | logger=migrator t=2024-04-09T14:11:50.949142231Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-04-09T14:11:50.950449645Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.306924ms grafana | logger=migrator t=2024-04-09T14:11:50.956521736Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2024-04-09T14:11:50.956617028Z level=info msg="Migration successfully executed" id="Update user table charset" duration=95.572µs grafana | logger=migrator t=2024-04-09T14:11:50.959288257Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-04-09T14:11:50.960483049Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.194842ms grafana | logger=migrator t=2024-04-09T14:11:50.963256229Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-04-09T14:11:50.963620786Z level=info msg="Migration successfully executed" id="Add missing user data" duration=363.687µs grafana | logger=migrator t=2024-04-09T14:11:50.966474599Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2024-04-09T14:11:50.967743902Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.251802ms grafana | logger=migrator t=2024-04-09T14:11:51.143345125Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2024-04-09T14:11:51.145015536Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.670531ms grafana | logger=migrator t=2024-04-09T14:11:51.148584231Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2024-04-09T14:11:51.15069257Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=2.107768ms grafana | logger=migrator t=2024-04-09T14:11:51.153816777Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2024-04-09T14:11:51.161807103Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.989346ms grafana | logger=migrator t=2024-04-09T14:11:51.166805204Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2024-04-09T14:11:51.168086848Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.281244ms grafana | logger=migrator t=2024-04-09T14:11:51.171096133Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2024-04-09T14:11:51.171574552Z level=info msg="Migration successfully executed" id="Update uid column values for 
users" duration=478.519µs grafana | logger=migrator t=2024-04-09T14:11:51.175772309Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2024-04-09T14:11:51.177115203Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.342835ms grafana | logger=migrator t=2024-04-09T14:11:51.18129513Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2024-04-09T14:11:51.181757118Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=462.288µs grafana | logger=migrator t=2024-04-09T14:11:51.188091064Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2024-04-09T14:11:51.18954694Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.454176ms grafana | logger=migrator t=2024-04-09T14:11:51.193797998Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2024-04-09T14:11:51.195201174Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.403176ms grafana | logger=migrator t=2024-04-09T14:11:51.301997187Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2024-04-09T14:11:51.303555286Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.561359ms grafana | logger=migrator t=2024-04-09T14:11:51.30922753Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2024-04-09T14:11:51.310722587Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.494087ms grafana | logger=migrator t=2024-04-09T14:11:51.314419205Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2024-04-09T14:11:51.315397923Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=976.048µs grafana | logger=migrator t=2024-04-09T14:11:51.318978178Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2024-04-09T14:11:51.31907725Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=98.892µs grafana | logger=migrator t=2024-04-09T14:11:51.324849945Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2024-04-09T14:11:51.325732102Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=879.776µs grafana | logger=migrator t=2024-04-09T14:11:51.328415771Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2024-04-09T14:11:51.329360318Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=944.788µs grafana | logger=migrator t=2024-04-09T14:11:51.332362553Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2024-04-09T14:11:51.333202108Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=839.435µs grafana | logger=migrator t=2024-04-09T14:11:51.3377074Z 
level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2024-04-09T14:11:51.338593307Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=885.727µs grafana | logger=migrator t=2024-04-09T14:11:51.351058265Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2024-04-09T14:11:51.3562655Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.205815ms grafana | logger=migrator t=2024-04-09T14:11:51.361680139Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2024-04-09T14:11:51.362729748Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.048709ms grafana | logger=migrator t=2024-04-09T14:11:51.379288991Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2024-04-09T14:11:51.380932061Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.64243ms grafana | logger=migrator t=2024-04-09T14:11:51.384810092Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2024-04-09T14:11:51.385675998Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=865.376µs grafana | logger=migrator t=2024-04-09T14:11:51.390051188Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2024-04-09T14:11:51.390958935Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=904.276µs grafana | logger=migrator t=2024-04-09T14:11:51.394043361Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2024-04-09T14:11:51.394918787Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=875.106µs grafana | logger=migrator t=2024-04-09T14:11:51.400185623Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2024-04-09T14:11:51.400672592Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=487.169µs grafana | logger=migrator t=2024-04-09T14:11:51.403081086Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2024-04-09T14:11:51.404059344Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=978.218µs grafana | logger=migrator t=2024-04-09T14:11:51.407573679Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" zookeeper_1 | ===> User zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper_1 | ===> Configuring ... zookeeper_1 | ===> Running preflight checks ... zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper_1 | ===> Launching ... zookeeper_1 | ===> Launching zookeeper ... 
zookeeper_1 | [2024-04-09 14:11:55,370] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-04-09 14:11:55,377] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-04-09 14:11:55,377] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-04-09 14:11:55,377] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-04-09 14:11:55,377] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-04-09 14:11:55,379] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-04-09 14:11:55,379] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-04-09 14:11:55,379] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-04-09 14:11:55,379] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper_1 | [2024-04-09 14:11:55,380] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
zookeeper_1 | [2024-04-09 14:11:55,381] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-04-09 14:11:55,381] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-04-09 14:11:55,381] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-04-09 14:11:55,381] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-04-09 14:11:55,382] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-04-09 14:11:55,382] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
zookeeper_1 | [2024-04-09 14:11:55,393] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics)
zookeeper_1 | [2024-04-09 14:11:55,396] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper_1 | [2024-04-09 14:11:55,396] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper_1 | [2024-04-09 14:11:55,398] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-04-09 14:11:55,407] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,407] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,408] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,408] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,408] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,408] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,408] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,408] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,408] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,408] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:host.name=e6905731a0ff (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locato
r-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookee
per-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,410] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | 
[2024-04-09 14:11:55,411] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper_1 | [2024-04-09 14:11:55,412] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,412] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-04-09 14:11:55,413] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-04-09 14:11:55,413] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-04-09 14:11:55,414] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-04-09 14:11:55,414] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-04-09 14:11:55,414] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-04-09 14:11:55,414] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | SLF4J: Class path contains multiple SLF4J bindings. kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] kafka | [2024-04-09 14:11:56,856] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO Client environment:host.name=ef04caad11d3 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) grafana | logger=migrator t=2024-04-09T14:11:51.408389903Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=815.895µs grafana | logger=migrator t=2024-04-09T14:11:51.414265421Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2024-04-09T14:11:51.415049745Z level=info msg="Migration successfully executed" id="create star table" duration=783.814µs grafana | logger=migrator t=2024-04-09T14:11:51.418300895Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2024-04-09T14:11:51.419206891Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=904.416µs grafana | logger=migrator t=2024-04-09T14:11:51.422744766Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2024-04-09T14:11:51.424233173Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.488057ms grafana | logger=migrator t=2024-04-09T14:11:51.427407011Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2024-04-09T14:11:51.428300678Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=896.757µs grafana | logger=migrator t=2024-04-09T14:11:51.43388942Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2024-04-09T14:11:51.434763986Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=874.346µs grafana | logger=migrator t=2024-04-09T14:11:51.437419024Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2024-04-09T14:11:51.438321921Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=902.917µs grafana | logger=migrator t=2024-04-09T14:11:51.442705451Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2024-04-09T14:11:51.443615948Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=910.597µs grafana | logger=migrator t=2024-04-09T14:11:51.4464715Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2024-04-09T14:11:51.447366946Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=895.736µs grafana | logger=migrator t=2024-04-09T14:11:51.452088653Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2024-04-09T14:11:51.452276456Z level=info msg="Migration successfully executed" id="Update org table charset" duration=188.853µs grafana | logger=migrator t=2024-04-09T14:11:51.45524345Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2024-04-09T14:11:51.455465104Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=223.214µs grafana | logger=migrator t=2024-04-09T14:11:51.45902226Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2024-04-09T14:11:51.459522619Z level=info msg="Migration 
successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=498.409µs grafana | logger=migrator t=2024-04-09T14:11:51.463801107Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2024-04-09T14:11:51.464644842Z level=info msg="Migration successfully executed" id="create dashboard table" duration=842.855µs grafana | logger=migrator t=2024-04-09T14:11:51.469318518Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2024-04-09T14:11:51.470276725Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=955.187µs grafana | logger=migrator t=2024-04-09T14:11:51.473198679Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2024-04-09T14:11:51.474245468Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.045959ms grafana | logger=migrator t=2024-04-09T14:11:51.477001798Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2024-04-09T14:11:51.477756622Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=754.324µs grafana | logger=migrator t=2024-04-09T14:11:51.482697353Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2024-04-09T14:11:51.483583579Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=886.187µs grafana | logger=migrator t=2024-04-09T14:11:51.486333719Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2024-04-09T14:11:51.487232186Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=897.017µs grafana | logger=migrator t=2024-04-09T14:11:51.4907332Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2024-04-09T14:11:51.498043553Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=7.310403ms grafana | logger=migrator t=2024-04-09T14:11:51.503659006Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2024-04-09T14:11:51.504592593Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=933.887µs grafana | logger=migrator t=2024-04-09T14:11:51.50769081Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2024-04-09T14:11:51.508528245Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=834.545µs grafana | logger=migrator t=2024-04-09T14:11:51.511872636Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2024-04-09T14:11:51.512833174Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=960.068µs grafana | logger=migrator t=2024-04-09T14:11:51.516557332Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2024-04-09T14:11:51.51699044Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=431.988µs grafana | logger=migrator t=2024-04-09T14:11:51.521052254Z level=info msg="Executing migration" id="drop table 
dashboard_v1" grafana | logger=migrator t=2024-04-09T14:11:51.521941631Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=888.817µs grafana | logger=migrator t=2024-04-09T14:11:51.525950034Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2024-04-09T14:11:51.526183368Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=233.604µs grafana | logger=migrator t=2024-04-09T14:11:51.531681629Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2024-04-09T14:11:51.534703604Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.020985ms grafana | logger=migrator t=2024-04-09T14:11:51.538076295Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2024-04-09T14:11:51.53998463Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.909405ms kafka | [2024-04-09 14:11:56,857] INFO Client environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13
.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/sc
ala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO 
Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,857] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,860] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:56,863] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-04-09 14:11:56,867] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-04-09 14:11:56,874] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-09 14:11:56,887] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-09 14:11:56,887] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-09 14:11:56,894] INFO Socket connection established, initiating session, client: /172.17.0.9:37666, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-09 14:11:56,930] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000445790000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-04-09 14:11:57,050] INFO Session: 0x100000445790000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:57,050] INFO EventThread shut down for session: 0x100000445790000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... kafka | [2024-04-09 14:11:57,724] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-04-09 14:11:58,051] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-04-09 14:11:58,131] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-04-09 14:11:58,133] INFO starting (kafka.server.KafkaServer) kafka | [2024-04-09 14:11:58,134] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-04-09 14:11:58,148] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
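
The "===> Check if Zookeeper is healthy ..." preflight above amounts to opening a plain ZooKeeper client session with the parameters the log reports (connectString=zookeeper:2181, sessionTimeout=40000) and closing it once the session is established. A minimal sketch of such a probe follows, assuming the standard org.apache.zookeeper client is on the classpath; the class and variable names are illustrative, and this is not the logged io.confluent.admin.utils.ZookeeperConnectionWatcher implementation itself:

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ZookeeperHealthProbe {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            // Same connect string and session timeout as logged above.
            ZooKeeper zk = new ZooKeeper("zookeeper:2181", 40000,
                    (WatchedEvent event) -> {
                        if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                            connected.countDown();
                        }
                    });
            try {
                if (!connected.await(30, TimeUnit.SECONDS)) {
                    throw new IllegalStateException("Zookeeper is not reachable");
                }
                // Corresponds to "Session establishment complete ... session id = 0x..."
                System.out.printf("Session established: 0x%x%n", zk.getSessionId());
            } finally {
                zk.close(); // produces the "Session: 0x... closed" line
            }
        }
    }

The "Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000445790000" and "Session: 0x100000445790000 closed" lines above correspond to the connect and close steps of exactly this kind of probe, after which the broker itself launches and opens its own longer-lived session (sessionTimeout=18000, seen below).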
(kafka.zookeeper.ZooKeeperClient) kafka | [2024-04-09 14:11:58,152] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,152] INFO Client environment:host.name=ef04caad11d3 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,152] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,152] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,152] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,152] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.
0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.ja
r:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,152] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,152] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,153] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,153] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,153] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,153] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,153] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,153] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,153] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,153] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,153] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,153] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,157] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1f6c9cd8 (org.apache.zookeeper.ZooKeeper) kafka | [2024-04-09 14:11:58,164] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-04-09 14:11:58,171] INFO zookeeper.request.timeout value is 0. 
feature enabled=false (org.apache.zookeeper.ClientCnxn) grafana | logger=migrator t=2024-04-09T14:11:51.543430894Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2024-04-09T14:11:51.547397526Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=3.966553ms grafana | logger=migrator t=2024-04-09T14:11:51.551365189Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2024-04-09T14:11:51.552192554Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=825.475µs grafana | logger=migrator t=2024-04-09T14:11:51.557017442Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2024-04-09T14:11:51.559038849Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.020557ms grafana | logger=migrator t=2024-04-09T14:11:51.562266328Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2024-04-09T14:11:51.563246916Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=977.488µs grafana | logger=migrator t=2024-04-09T14:11:51.569961549Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2024-04-09T14:11:51.571233862Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.274093ms grafana | logger=migrator t=2024-04-09T14:11:51.574522062Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2024-04-09T14:11:51.574748466Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=226.214µs grafana | logger=migrator t=2024-04-09T14:11:51.579805299Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2024-04-09T14:11:51.580013633Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=207.074µs grafana | logger=migrator t=2024-04-09T14:11:51.585087876Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2024-04-09T14:11:51.587180984Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.093199ms grafana | logger=migrator t=2024-04-09T14:11:51.592548582Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2024-04-09T14:11:51.594684931Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.135949ms grafana | logger=migrator t=2024-04-09T14:11:51.60174251Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2024-04-09T14:11:51.603856749Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.113859ms grafana | logger=migrator t=2024-04-09T14:11:51.609483822Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2024-04-09T14:11:51.612973436Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=3.483843ms grafana | logger=migrator t=2024-04-09T14:11:51.620201528Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator 
t=2024-04-09T14:11:51.620528294Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=327.386µs grafana | logger=migrator t=2024-04-09T14:11:51.628051701Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2024-04-09T14:11:51.628895387Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=843.386µs grafana | logger=migrator t=2024-04-09T14:11:51.637367672Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2024-04-09T14:11:51.638743697Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.379455ms grafana | logger=migrator t=2024-04-09T14:11:51.643779709Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2024-04-09T14:11:51.64383012Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=52.961µs grafana | logger=migrator t=2024-04-09T14:11:51.651399368Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2024-04-09T14:11:51.652489238Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.08812ms grafana | logger=migrator t=2024-04-09T14:11:51.659786002Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2024-04-09T14:11:51.660986414Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.200682ms grafana | logger=migrator t=2024-04-09T14:11:51.667475203Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2024-04-09T14:11:51.674093993Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.61957ms grafana | logger=migrator t=2024-04-09T14:11:51.723431046Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2024-04-09T14:11:51.724466385Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=1.038979ms grafana | logger=migrator t=2024-04-09T14:11:51.746747472Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2024-04-09T14:11:51.747925634Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.179662ms grafana | logger=migrator t=2024-04-09T14:11:51.763532469Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2024-04-09T14:11:51.765587317Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=2.052888ms grafana | logger=migrator t=2024-04-09T14:11:51.773203877Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2024-04-09T14:11:51.773503042Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=298.776µs grafana | logger=migrator t=2024-04-09T14:11:51.778820139Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator 
t=2024-04-09T14:11:51.779666875Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=844.646µs grafana | logger=migrator t=2024-04-09T14:11:51.78705182Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2024-04-09T14:11:51.789196019Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.143589ms grafana | logger=migrator t=2024-04-09T14:11:51.803970909Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2024-04-09T14:11:51.806020917Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=2.050638ms grafana | logger=migrator t=2024-04-09T14:11:51.813693047Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2024-04-09T14:11:51.81388498Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=192.183µs grafana | logger=migrator t=2024-04-09T14:11:51.818982724Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2024-04-09T14:11:51.819508823Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=525.129µs grafana | logger=migrator t=2024-04-09T14:11:51.82258368Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2024-04-09T14:11:51.824036496Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.453766ms grafana | logger=migrator t=2024-04-09T14:11:51.829346703Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2024-04-09T14:11:51.833144303Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.79348ms grafana | logger=migrator t=2024-04-09T14:11:51.840846034Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2024-04-09T14:11:51.841775211Z level=info msg="Migration successfully executed" id="create data_source table" duration=933.097µs grafana | logger=migrator t=2024-04-09T14:11:51.847059097Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2024-04-09T14:11:51.848983352Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.924585ms grafana | logger=migrator t=2024-04-09T14:11:51.856229645Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2024-04-09T14:11:51.859085577Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=2.836422ms grafana | logger=migrator t=2024-04-09T14:11:51.867597163Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2024-04-09T14:11:51.868954498Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.357685ms grafana | logger=migrator t=2024-04-09T14:11:51.879582122Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2024-04-09T14:11:51.880512059Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=929.967µs grafana | logger=migrator t=2024-04-09T14:11:51.886655642Z level=info msg="Executing 
migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2024-04-09T14:11:51.89473645Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=8.081537ms grafana | logger=migrator t=2024-04-09T14:11:51.902844998Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2024-04-09T14:11:51.903777935Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=932.987µs grafana | logger=migrator t=2024-04-09T14:11:51.913495133Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2024-04-09T14:11:51.91498939Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.541718ms grafana | logger=migrator t=2024-04-09T14:11:51.920924578Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2024-04-09T14:11:51.921786214Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=861.496µs grafana | logger=migrator t=2024-04-09T14:11:51.929166429Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2024-04-09T14:11:51.930360421Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.205402ms grafana | logger=migrator t=2024-04-09T14:11:51.936549064Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2024-04-09T14:11:51.939973947Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.424043ms grafana | logger=migrator t=2024-04-09T14:11:51.943919109Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2024-04-09T14:11:51.94619181Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.272241ms grafana | logger=migrator t=2024-04-09T14:11:51.952694099Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2024-04-09T14:11:51.95272367Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=26.211µs grafana | logger=migrator t=2024-04-09T14:11:51.95983859Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2024-04-09T14:11:51.960162586Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=324.626µs grafana | logger=migrator t=2024-04-09T14:11:51.969516047Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2024-04-09T14:11:51.973164864Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.648287ms grafana | logger=migrator t=2024-04-09T14:11:51.976969294Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2024-04-09T14:11:51.977508703Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=539.429µs grafana | logger=migrator t=2024-04-09T14:11:51.981955425Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2024-04-09T14:11:51.982183459Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=228.604µs grafana | logger=migrator 
grafana | logger=migrator t=2024-04-09T14:11:51.988625357Z level=info msg="Executing migration" id="Add uid column"
grafana | logger=migrator t=2024-04-09T14:11:51.992288954Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.663537ms
grafana | logger=migrator t=2024-04-09T14:11:52.000943622Z level=info msg="Executing migration" id="Update uid value"
grafana | logger=migrator t=2024-04-09T14:11:52.001186757Z level=info msg="Migration successfully executed" id="Update uid value" duration=242.474µs
grafana | logger=migrator t=2024-04-09T14:11:52.004693831Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
grafana | logger=migrator t=2024-04-09T14:11:52.005532086Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=838.115µs
grafana | logger=migrator t=2024-04-09T14:11:52.009514539Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
grafana | logger=migrator t=2024-04-09T14:11:52.010390841Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=876.013µs
grafana | logger=migrator t=2024-04-09T14:11:52.016564031Z level=info msg="Executing migration" id="create api_key table"
grafana | logger=migrator t=2024-04-09T14:11:52.017962561Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.39754ms
grafana | logger=migrator t=2024-04-09T14:11:52.025237347Z level=info msg="Executing migration" id="add index api_key.account_id"
grafana | logger=migrator t=2024-04-09T14:11:52.026432594Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.198057ms
grafana | logger=migrator t=2024-04-09T14:11:52.031243714Z level=info msg="Executing migration" id="add index api_key.key"
grafana | logger=migrator t=2024-04-09T14:11:52.032391861Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.149467ms
grafana | logger=migrator t=2024-04-09T14:11:52.037749378Z level=info msg="Executing migration" id="add index api_key.account_id_name"
zookeeper_1 | [2024-04-09 14:11:55,414] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-04-09 14:11:55,414] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-04-09 14:11:55,416] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,416] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,416] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper_1 | [2024-04-09 14:11:55,416] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper_1 | [2024-04-09 14:11:55,416] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,445] INFO Logging initialized @544ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
zookeeper_1 | [2024-04-09 14:11:55,519] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1 | [2024-04-09 14:11:55,519] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1 | [2024-04-09 14:11:55,539] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server)
zookeeper_1 | [2024-04-09 14:11:55,570] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
zookeeper_1 | [2024-04-09 14:11:55,570] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
zookeeper_1 | [2024-04-09 14:11:55,572] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
zookeeper_1 | [2024-04-09 14:11:55,575] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
zookeeper_1 | [2024-04-09 14:11:55,583] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1 | [2024-04-09 14:11:55,603] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
zookeeper_1 | [2024-04-09 14:11:55,603] INFO Started @702ms (org.eclipse.jetty.server.Server)
zookeeper_1 | [2024-04-09 14:11:55,603] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
zookeeper_1 | [2024-04-09 14:11:55,609] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper_1 | [2024-04-09 14:11:55,609] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper_1 | [2024-04-09 14:11:55,611] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper_1 | [2024-04-09 14:11:55,612] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper_1 | [2024-04-09 14:11:55,625] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper_1 | [2024-04-09 14:11:55,625] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper_1 | [2024-04-09 14:11:55,626] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
zookeeper_1 | [2024-04-09 14:11:55,626] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
zookeeper_1 | [2024-04-09 14:11:55,630] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
zookeeper_1 | [2024-04-09 14:11:55,630] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-04-09 14:11:55,633] INFO Snapshot loaded in 6 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
zookeeper_1 | [2024-04-09 14:11:55,634] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-04-09 14:11:55,634] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-04-09 14:11:55,642] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper_1 | [2024-04-09 14:11:55,643] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
zookeeper_1 | [2024-04-09 14:11:55,656] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
zookeeper_1 | [2024-04-09 14:11:55,657] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
zookeeper_1 | [2024-04-09 14:11:56,911] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
kafka | [2024-04-09 14:11:58,173] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-04-09 14:11:58,177] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-09 14:11:58,183] INFO Socket connection established, initiating session, client: /172.17.0.9:37668, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-09 14:11:58,191] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000445790001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-04-09 14:11:58,200] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
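At this point ZooKeeper is serving on 0.0.0.0:2181 and the Kafka broker has negotiated an 18000 ms session with it. A quick way to poke the same client port from another container on the compose network, assuming nc is installed there and the four-letter-word commands are whitelisted on the server (neither is shown in this log; 'srvr' is whitelisted by default in ZooKeeper 3.5+, 'ruok' usually has to be added to 4lw.commands.whitelist):

# Both lines are a hedged sketch, not taken from this build.
echo srvr | nc zookeeper 2181    # prints version, latency and mode (standalone here)
echo ruok | nc zookeeper 2181    # answers "imok" when the server is running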
kafka | [2024-04-09 14:11:58,508] INFO Cluster ID = TupwFhGQQjGmvCIddVeH4w (kafka.server.KafkaServer)
kafka | [2024-04-09 14:11:58,511] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka | [2024-04-09 14:11:58,555] INFO KafkaConfig values:
kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | alter.config.policy.class.name = null
kafka | alter.log.dirs.replication.quota.window.num = 11
kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | authorizer.class.name =
kafka | auto.create.topics.enable = true
kafka | auto.include.jmx.reporter = true
kafka | auto.leader.rebalance.enable = true
kafka | background.threads = 10
kafka | broker.heartbeat.interval.ms = 2000
kafka | broker.id = 1
kafka | broker.id.generation.enable = true
kafka | broker.rack = null
kafka | broker.session.timeout.ms = 9000
kafka | client.quota.callback.class = null
kafka | compression.type = producer
kafka | connection.failed.authentication.delay.ms = 100
kafka | connections.max.idle.ms = 600000
kafka | connections.max.reauth.ms = 0
kafka | control.plane.listener.name = null
kafka | controlled.shutdown.enable = true
kafka | controlled.shutdown.max.retries = 3
kafka | controlled.shutdown.retry.backoff.ms = 5000
kafka | controller.listener.names = null
kafka | controller.quorum.append.linger.ms = 25
kafka | controller.quorum.election.backoff.max.ms = 1000
kafka | controller.quorum.election.timeout.ms = 1000
kafka | controller.quorum.fetch.timeout.ms = 2000
kafka | controller.quorum.request.timeout.ms = 2000
kafka | controller.quorum.retry.backoff.ms = 20
kafka | controller.quorum.voters = []
kafka | controller.quota.window.num = 11
kafka | controller.quota.window.size.seconds = 1
kafka | controller.socket.timeout.ms = 30000
kafka | create.topic.policy.class.name = null
kafka | default.replication.factor = 1
kafka | delegation.token.expiry.check.interval.ms = 3600000
kafka | delegation.token.expiry.time.ms = 86400000
kafka | delegation.token.master.key = null
kafka | delegation.token.max.lifetime.ms = 604800000
kafka | delegation.token.secret.key = null
kafka | delete.records.purgatory.purge.interval.requests = 1
kafka | delete.topic.enable = true
kafka | early.start.listeners = null
kafka | fetch.max.bytes = 57671680
kafka | fetch.purgatory.purge.interval.requests = 1000
kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor]
kafka | group.consumer.heartbeat.interval.ms = 5000
kafka | group.consumer.max.heartbeat.interval.ms = 15000
kafka | group.consumer.max.session.timeout.ms = 60000
kafka | group.consumer.max.size = 2147483647
kafka | group.consumer.min.heartbeat.interval.ms = 5000
kafka | group.consumer.min.session.timeout.ms = 45000
kafka | group.consumer.session.timeout.ms = 45000
kafka | group.coordinator.new.enable = false
kafka | group.coordinator.threads = 1
kafka | group.initial.rebalance.delay.ms = 3000
kafka | group.max.session.timeout.ms = 1800000
kafka | group.max.size = 2147483647
kafka | group.min.session.timeout.ms = 6000
kafka | initial.broker.registration.timeout.ms = 60000
kafka | inter.broker.listener.name = PLAINTEXT
kafka | inter.broker.protocol.version = 3.6-IV2
kafka | kafka.metrics.polling.interval.secs = 10
kafka | kafka.metrics.reporters = []
kafka | leader.imbalance.check.interval.seconds = 300
kafka | leader.imbalance.per.broker.percentage = 10
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
kafka | log.cleaner.backoff.ms = 15000
kafka | log.cleaner.dedupe.buffer.size = 134217728
kafka | log.cleaner.delete.retention.ms = 86400000
kafka | log.cleaner.enable = true
kafka | log.cleaner.io.buffer.load.factor = 0.9
kafka | log.cleaner.io.buffer.size = 524288
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka | log.cleaner.min.cleanable.ratio = 0.5
kafka | log.cleaner.min.compaction.lag.ms = 0
kafka | log.cleaner.threads = 1
kafka | log.cleanup.policy = [delete]
kafka | log.dir = /tmp/kafka-logs
kafka | log.dirs = /var/lib/kafka/data
kafka | log.flush.interval.messages = 9223372036854775807
kafka | log.flush.interval.ms = null
kafka | log.flush.offset.checkpoint.interval.ms = 60000
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka | log.index.interval.bytes = 4096
kafka | log.index.size.max.bytes = 10485760
kafka | log.local.retention.bytes = -2
kafka | log.local.retention.ms = -2
kafka | log.message.downconversion.enable = true
kafka | log.message.format.version = 3.0-IV1
kafka | log.message.timestamp.after.max.ms = 9223372036854775807
kafka | log.message.timestamp.before.max.ms = 9223372036854775807
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
kafka | log.message.timestamp.type = CreateTime
kafka | log.preallocate = false
kafka | log.retention.bytes = -1
kafka | log.retention.check.interval.ms = 300000
kafka | log.retention.hours = 168
kafka | log.retention.minutes = null
kafka | log.retention.ms = null
kafka | log.roll.hours = 168
kafka | log.roll.jitter.hours = 0
kafka | log.roll.jitter.ms = null
kafka | log.roll.ms = null
kafka | log.segment.bytes = 1073741824
kafka | log.segment.delete.delay.ms = 60000
kafka | max.connection.creation.rate = 2147483647
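The listener block above is the usual dual-listener arrangement for Kafka under docker-compose: PLAINTEXT://kafka:9092 is advertised to other containers on the compose network, PLAINTEXT_HOST://localhost:29092 to the build host through the published port. A hedged sketch of checking both endpoints, assuming the standard Kafka CLI tools are on the PATH inside the Confluent image (their presence is an assumption, not shown in this log):

# From another container on the compose network (resolves the 'kafka' service name):
kafka-broker-api-versions --bootstrap-server kafka:9092 | head -n 1
# From the Jenkins host itself, through the host listener:
kafka-broker-api-versions --bootstrap-server localhost:29092 | head -n 1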
mariadb | 2024-04-09 14:11:46+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
mariadb | 2024-04-09 14:11:46+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
mariadb | 2024-04-09 14:11:46+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
mariadb | 2024-04-09 14:11:46+00:00 [Note] [Entrypoint]: Initializing database files
mariadb | 2024-04-09 14:11:47 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-04-09 14:11:47 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-04-09 14:11:47 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb |
mariadb |
mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
mariadb | To do so, start the server, then issue the following command:
mariadb |
mariadb | '/usr/bin/mysql_secure_installation'
mariadb |
mariadb | which will also give you the option of removing the test
mariadb | databases and anonymous user created by default. This is
mariadb | strongly recommended for production servers.
mariadb |
mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
mariadb |
mariadb | Please report any problems at https://mariadb.org/jira
mariadb |
mariadb | The latest information about MariaDB is available at https://mariadb.org/.
mariadb |
mariadb | Consider joining MariaDB's strong and vibrant community:
mariadb | https://mariadb.org/get-involved/
mariadb |
mariadb | 2024-04-09 14:11:48+00:00 [Note] [Entrypoint]: Database files initialized
mariadb | 2024-04-09 14:11:48+00:00 [Note] [Entrypoint]: Starting temporary server
mariadb | 2024-04-09 14:11:48+00:00 [Note] [Entrypoint]: Waiting for server startup
mariadb | 2024-04-09 14:11:48 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 97 ...
mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
grafana | logger=migrator t=2024-04-09T14:11:52.039161518Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.41597ms
grafana | logger=migrator t=2024-04-09T14:11:52.042408455Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
grafana | logger=migrator t=2024-04-09T14:11:52.043067665Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=661.27µs
grafana | logger=migrator t=2024-04-09T14:11:52.051625149Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
grafana | logger=migrator t=2024-04-09T14:11:52.053156911Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.535252ms
grafana | logger=migrator t=2024-04-09T14:11:52.058030381Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
grafana | logger=migrator t=2024-04-09T14:11:52.058792982Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=745.921µs
grafana | logger=migrator t=2024-04-09T14:11:52.063409059Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
grafana | logger=migrator t=2024-04-09T14:11:52.074408709Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=11.000079ms
grafana | logger=migrator t=2024-04-09T14:11:52.079814557Z level=info msg="Executing migration" id="create api_key table v2"
grafana | logger=migrator t=2024-04-09T14:11:52.080783111Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=968.634µs
grafana | logger=migrator t=2024-04-09T14:11:52.088020965Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
grafana | logger=migrator t=2024-04-09T14:11:52.089108821Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.087626ms
grafana | logger=migrator t=2024-04-09T14:11:52.127990514Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
grafana | logger=migrator t=2024-04-09T14:11:52.130251917Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=2.263373ms
grafana | logger=migrator t=2024-04-09T14:11:52.140079359Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
grafana | logger=migrator t=2024-04-09T14:11:52.141173665Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.088936ms
grafana | logger=migrator t=2024-04-09T14:11:52.147659649Z level=info msg="Executing migration" id="copy api_key v1 to v2"
grafana | logger=migrator t=2024-04-09T14:11:52.148981818Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=1.32216ms
grafana | logger=migrator t=2024-04-09T14:11:52.154323125Z level=info msg="Executing migration" id="Drop old table api_key_v1"
grafana | logger=migrator t=2024-04-09T14:11:52.155133557Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=810.462µs
grafana | logger=migrator t=2024-04-09T14:11:52.165255063Z level=info msg="Executing migration" id="Update api_key table charset"
grafana | logger=migrator t=2024-04-09T14:11:52.165285874Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=33.641µs
grafana | logger=migrator t=2024-04-09T14:11:52.175209827Z level=info msg="Executing migration" id="Add expires to api_key table"
grafana | logger=migrator t=2024-04-09T14:11:52.180688977Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=5.48121ms
grafana | logger=migrator t=2024-04-09T14:11:52.191380201Z level=info msg="Executing migration" id="Add service account foreign key"
grafana | logger=migrator t=2024-04-09T14:11:52.19471207Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=3.331188ms
grafana | logger=migrator t=2024-04-09T14:11:52.203139962Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
grafana | logger=migrator t=2024-04-09T14:11:52.203663499Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=521.437µs
grafana | logger=migrator t=2024-04-09T14:11:52.20717595Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
grafana | logger=migrator t=2024-04-09T14:11:52.213449061Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=6.272161ms
grafana | logger=migrator t=2024-04-09T14:11:52.21960851Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
grafana | logger=migrator t=2024-04-09T14:11:52.222052675Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.443775ms
grafana | logger=migrator t=2024-04-09T14:11:52.22785689Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
grafana | logger=migrator t=2024-04-09T14:11:52.228903665Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.046736ms
grafana | logger=migrator t=2024-04-09T14:11:52.232628238Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
grafana | logger=migrator t=2024-04-09T14:11:52.233476951Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=849.913µs
grafana | logger=migrator t=2024-04-09T14:11:52.241431786Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
grafana | logger=migrator t=2024-04-09T14:11:52.244158095Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=2.725979ms
grafana | logger=migrator t=2024-04-09T14:11:52.248466138Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
grafana | logger=migrator t=2024-04-09T14:11:52.249739756Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.274678ms
grafana | logger=migrator t=2024-04-09T14:11:52.25620871Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
grafana | logger=migrator t=2024-04-09T14:11:52.25756831Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.358659ms
grafana | logger=migrator t=2024-04-09T14:11:52.265132769Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
grafana | logger=migrator t=2024-04-09T14:11:52.265956951Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=823.992µs
grafana | logger=migrator t=2024-04-09T14:11:52.276151779Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: Number of transaction pools: 1
mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mariadb | 2024-04-09 14:11:48 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
mariadb | 2024-04-09 14:11:48 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-04-09 14:11:48 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: Completed initialization of buffer pool
mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: 128 rollback segments are active.
mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: log sequence number 46590; transaction id 14
mariadb | 2024-04-09 14:11:48 0 [Note] Plugin 'FEEDBACK' is disabled.
mariadb | 2024-04-09 14:11:48 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb | 2024-04-09 14:11:48 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
mariadb | 2024-04-09 14:11:48 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
mariadb | 2024-04-09 14:11:48 0 [Note] mariadbd: ready for connections.
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 0  mariadb.org binary distribution
mariadb | 2024-04-09 14:11:49+00:00 [Note] [Entrypoint]: Temporary server started.
mariadb | 2024-04-09 14:11:51+00:00 [Note] [Entrypoint]: Creating user policy_user
mariadb | 2024-04-09 14:11:51+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
mariadb |
mariadb | 2024-04-09 14:11:51+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
mariadb |
mariadb | 2024-04-09 14:11:51+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
mariadb | #!/bin/bash -xv
mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
mariadb | #
mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
mariadb | # you may not use this file except in compliance with the License.
mariadb | # You may obtain a copy of the License at
mariadb | #
mariadb | # http://www.apache.org/licenses/LICENSE-2.0
mariadb | #
mariadb | # Unless required by applicable law or agreed to in writing, software
mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
mariadb | # See the License for the specific language governing permissions and
mariadb | # limitations under the License.
mariadb |
mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | do
mariadb |     mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
mariadb |     mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
grafana | logger=migrator t=2024-04-09T14:11:52.276485963Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=192.123µs
grafana | logger=migrator t=2024-04-09T14:11:52.283911091Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
grafana | logger=migrator t=2024-04-09T14:11:52.283982812Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=74.912µs
grafana | logger=migrator t=2024-04-09T14:11:52.289118196Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
kafka | max.connections = 2147483647
kafka | max.connections.per.ip = 2147483647
kafka | max.connections.per.ip.overrides =
kafka | max.incremental.fetch.session.cache.slots = 1000
kafka | message.max.bytes = 1048588
kafka | metadata.log.dir = null
kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
kafka | metadata.log.max.snapshot.interval.ms = 3600000
kafka | metadata.log.segment.bytes = 1073741824
kafka | metadata.log.segment.min.bytes = 8388608
kafka | metadata.log.segment.ms = 604800000
kafka | metadata.max.idle.interval.ms = 500
kafka | metadata.max.retention.bytes = 104857600
kafka | metadata.max.retention.ms = 604800000
kafka | metric.reporters = []
kafka | metrics.num.samples = 2
kafka | metrics.recording.level = INFO
kafka | metrics.sample.window.ms = 30000
kafka | min.insync.replicas = 1
kafka | node.id = 1
kafka | num.io.threads = 8
kafka | num.network.threads = 3
mariadb | done
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb |
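db.sh has now created all six policy databases and granted policy_user on each; the plaintext -psecret and -ppolicy_user arguments appear above only because the script runs under bash -xv, which echoes every expanded command. A minimal follow-up check, reusing the same credentials from the trace (a sketch, assuming the compose network resolves the mariadb service name):

# Verify the databases and the grants created by db.sh.
mysql -h mariadb -upolicy_user -ppolicy_user -e 'SHOW DATABASES;'
mysql -h mariadb -upolicy_user -ppolicy_user -e "SHOW GRANTS FOR 'policy_user'@'%';"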
msg="Migration successfully executed" id="Update plugin_setting table charset" duration=27.701µs grafana | logger=migrator t=2024-04-09T14:11:52.364568328Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2024-04-09T14:11:52.365875587Z level=info msg="Migration successfully executed" id="create session table" duration=1.306869ms grafana | logger=migrator t=2024-04-09T14:11:52.371331946Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2024-04-09T14:11:52.371468878Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=138.512µs grafana | logger=migrator t=2024-04-09T14:11:52.374287739Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2024-04-09T14:11:52.374417531Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=129.252µs grafana | logger=migrator t=2024-04-09T14:11:52.37714161Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2024-04-09T14:11:52.378342257Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.199607ms grafana | logger=migrator t=2024-04-09T14:11:52.381609335Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2024-04-09T14:11:52.383065206Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.455351ms grafana | logger=migrator t=2024-04-09T14:11:52.389105613Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2024-04-09T14:11:52.389194544Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=90.831µs grafana | logger=migrator t=2024-04-09T14:11:52.392147797Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2024-04-09T14:11:52.392234358Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=87.701µs grafana | logger=migrator t=2024-04-09T14:11:52.394713374Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2024-04-09T14:11:52.399580335Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.861101ms grafana | logger=migrator t=2024-04-09T14:11:52.403892217Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2024-04-09T14:11:52.406949031Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.056344ms grafana | logger=migrator t=2024-04-09T14:11:52.446555145Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2024-04-09T14:11:52.446848729Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=292.935µs grafana | logger=migrator t=2024-04-09T14:11:52.45035143Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2024-04-09T14:11:52.450531962Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=179.442µs grafana | logger=migrator t=2024-04-09T14:11:52.454326587Z level=info msg="Executing migration" id="create preferences table v3" mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" mariadb | + mysql -uroot -psecret --execute 
'FLUSH PRIVILEGES;' mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp mariadb | mariadb | 2024-04-09 14:11:52+00:00 [Note] [Entrypoint]: Stopping temporary server mariadb | 2024-04-09 14:11:52 0 [Note] mariadbd (initiated by: unknown): Normal shutdown mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: FTS optimize thread exiting. mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Starting shutdown... mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Buffer pool(s) dump completed at 240409 14:11:52 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Shutdown completed; log sequence number 328914; transaction id 298 mariadb | 2024-04-09 14:11:52 0 [Note] mariadbd: Shutdown complete mariadb | mariadb | 2024-04-09 14:11:52+00:00 [Note] [Entrypoint]: Temporary server stopped mariadb | mariadb | 2024-04-09 14:11:52+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. mariadb | mariadb | 2024-04-09 14:11:52 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-04-09 14:11:52 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-04-09 14:11:52 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-04-09 14:11:52 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: log sequence number 328914; transaction id 299 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool mariadb | 2024-04-09 14:11:52 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-04-09 14:11:52 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-04-09 14:11:52 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. mariadb | 2024-04-09 14:11:52 0 [Note] Server socket created on IP: '0.0.0.0'. mariadb | 2024-04-09 14:11:52 0 [Note] Server socket created on IP: '::'. mariadb | 2024-04-09 14:11:52 0 [Note] mariadbd: ready for connections. 
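Everything from "Starting temporary server" to "MariaDB init process done" is the stock docker-entrypoint bootstrap: run a throwaway socket-only server (note port: 0 in the first banner), execute the /docker-entrypoint-initdb.d scripts against it, shut it down cleanly, then start the real server as process 1 on port 3306. A heavily condensed, hypothetical reconstruction of that control flow; the real entrypoint is far more defensive and handles credentials explicitly:

# Sketch of the entrypoint's init phase, not the actual script.
mariadbd --port=0 &                          # temporary, socket-only server
for f in /docker-entrypoint-initdb.d/*; do
  case "$f" in
    *.sh)  bash "$f" ;;                      # db.sh ran in this step
    *.sql) mysql < "$f" ;;
    *)     echo "ignoring $f" ;;             # db.conf was skipped this way
  esac
done
mysqladmin shutdown                          # "Stopping temporary server"
wait
exec mariadbd                                # final server, process 1, port 3306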
mariadb | 2024-04-09 14:11:53 0 [Note] InnoDB: Buffer pool(s) load completed at 240409 14:11:52
mariadb | 2024-04-09 14:11:53 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication)
mariadb | 2024-04-09 14:11:53 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication)
mariadb | 2024-04-09 14:11:53 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
mariadb | 2024-04-09 14:11:54 64 [Warning] Aborted connection 64 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
grafana | logger=migrator t=2024-04-09T14:11:52.455746238Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.419681ms
grafana | logger=migrator t=2024-04-09T14:11:52.46075394Z level=info msg="Executing migration" id="Update preferences table charset"
grafana | logger=migrator t=2024-04-09T14:11:52.460790751Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=38.031µs
grafana | logger=migrator t=2024-04-09T14:11:52.46353474Z level=info msg="Executing migration" id="Add column team_id in preferences"
grafana | logger=migrator t=2024-04-09T14:11:52.468429761Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.894841ms
grafana | logger=migrator t=2024-04-09T14:11:52.539005432Z level=info msg="Executing migration" id="Update team_id column values in preferences"
grafana | logger=migrator t=2024-04-09T14:11:52.539235766Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=233.544µs
grafana | logger=migrator t=2024-04-09T14:11:52.542115528Z level=info msg="Executing migration" id="Add column week_start in preferences"
grafana | logger=migrator t=2024-04-09T14:11:52.546991868Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.87454ms
grafana | logger=migrator t=2024-04-09T14:11:52.551396392Z level=info msg="Executing migration" id="Add column preferences.json_data"
grafana | logger=migrator t=2024-04-09T14:11:52.554771741Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.375349ms
policy-db-migrator | Waiting for mariadb port 3306...
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded!
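The migrator's first job is simply to block until MariaDB's TCP port accepts a connection; each refused attempt prints one of the nc failure lines above. A minimal loop that produces exactly this kind of output with the OpenBSD netcat (the sleep interval is an assumption):

# Retry until the port opens; -z probes without sending data, -v prints the
# "failed: Connection refused" / "succeeded!" lines seen in the log.
while ! nc -vz mariadb 3306; do
  sleep 2
done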
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator | name version
policy-db-migrator | policyadmin 0
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-db-migrator | upgrade: 0 -> 1300
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
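The policyadmin schema is versioned in numbered releases: the migrator reports the current level (0) and the target (1300), then applies each NNNN-*.sql script in order inside the dashed banners that follow. A hedged sketch of such an apply loop; the file layout and variable names are invented here, not read from the db-migrator image:

# Apply every numbered upgrade script in lexical order. Each script uses
# CREATE TABLE IF NOT EXISTS, so re-running an already-applied step is harmless.
for f in /opt/app/policy/sql/[0-9]*-*.sql; do
  echo "> upgrade ${f##*/}"
  mysql -h mariadb -upolicy_user -ppolicy_user policyadmin < "$f"
done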
t=2024-04-09T14:11:52.624443369Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.472301ms grafana | logger=migrator t=2024-04-09T14:11:52.6321292Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2024-04-09T14:11:52.633133335Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.003505ms grafana | logger=migrator t=2024-04-09T14:11:52.641164091Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2024-04-09T14:11:52.652023018Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.858647ms grafana | logger=migrator t=2024-04-09T14:11:52.699599077Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2024-04-09T14:11:52.700671312Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.075695ms grafana | logger=migrator t=2024-04-09T14:11:52.705014545Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2024-04-09T14:11:52.705905868Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=888.663µs grafana | logger=migrator t=2024-04-09T14:11:52.710668247Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2024-04-09T14:11:52.710975381Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=307.384µs grafana | logger=migrator t=2024-04-09T14:11:52.714689035Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2024-04-09T14:11:52.715566578Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=878.243µs grafana | logger=migrator t=2024-04-09T14:11:52.719026308Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2024-04-09T14:11:52.71988884Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=862.212µs grafana | logger=migrator t=2024-04-09T14:11:52.724858222Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2024-04-09T14:11:52.728575976Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.716874ms grafana | logger=migrator t=2024-04-09T14:11:52.731949385Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2024-04-09T14:11:52.735653768Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.702463ms grafana | logger=migrator t=2024-04-09T14:11:52.739938411Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2024-04-09T14:11:52.742931624Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.993003ms grafana | logger=migrator t=2024-04-09T14:11:52.746739369Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2024-04-09T14:11:52.753244613Z level=info msg="Migration successfully executed" id="Add column 
disable_resolve_message" duration=6.502454ms grafana | logger=migrator t=2024-04-09T14:11:52.756204316Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2024-04-09T14:11:52.756849106Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=644.379µs grafana | logger=migrator t=2024-04-09T14:11:52.759349002Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2024-04-09T14:11:52.759381202Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=32.64µs policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:52.762551828Z level=info msg="Executing migration" id="Update alert_notification table charset"
grafana | logger=migrator t=2024-04-09T14:11:52.762570698Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=19.59µs
grafana | logger=migrator t=2024-04-09T14:11:52.769963625Z level=info msg="Executing migration" id="create notification_journal table v1"
grafana | logger=migrator t=2024-04-09T14:11:52.770806557Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=844.852µs
grafana | logger=migrator t=2024-04-09T14:11:52.775029619Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
grafana | logger=migrator t=2024-04-09T14:11:52.776060574Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.031635ms
grafana | logger=migrator t=2024-04-09T14:11:52.780455697Z level=info msg="Executing migration" id="drop alert_notification_journal"
grafana | logger=migrator t=2024-04-09T14:11:52.781228458Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=773.811µs
policy-db-migrator |
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
grafana | logger=migrator t=2024-04-09T14:11:52.785078104Z level=info msg="Executing migration" id="create alert_notification_state table v1"
grafana | logger=migrator t=2024-04-09T14:11:52.785957447Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=879.573µs
grafana | logger=migrator t=2024-04-09T14:11:52.789820563Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
grafana | logger=migrator t=2024-04-09T14:11:52.790916738Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.096116ms
grafana | logger=migrator t=2024-04-09T14:11:52.794625122Z level=info msg="Executing migration" id="Add for to alert table"
grafana | logger=migrator t=2024-04-09T14:11:52.798370136Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.744464ms
grafana | logger=migrator t=2024-04-09T14:11:52.805192775Z level=info msg="Executing migration" id="Add column uid in alert_notification"
grafana | logger=migrator t=2024-04-09T14:11:52.81104557Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=5.852125ms
grafana | logger=migrator t=2024-04-09T14:11:52.847913282Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
grafana | logger=migrator t=2024-04-09T14:11:52.848372509Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=466.437µs
grafana | logger=migrator t=2024-04-09T14:11:52.852111273Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
grafana | logger=migrator t=2024-04-09T14:11:52.855392471Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=3.281138ms
grafana | logger=migrator t=2024-04-09T14:11:52.862004946Z level=info msg="Executing migration" id="Remove unique index org_id_name"
grafana | logger=migrator t=2024-04-09T14:11:52.863510138Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.604423ms
grafana | logger=migrator t=2024-04-09T14:11:52.869122999Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
grafana | logger=migrator t=2024-04-09T14:11:52.873056486Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.937057ms
grafana | logger=migrator t=2024-04-09T14:11:52.877319298Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
grafana | logger=migrator t=2024-04-09T14:11:52.877388849Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=70.171µs
grafana | logger=migrator t=2024-04-09T14:11:52.882450432Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
grafana | logger=migrator t=2024-04-09T14:11:52.883093002Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=642.52µs
grafana | logger=migrator t=2024-04-09T14:11:52.885975233Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
grafana | logger=migrator t=2024-04-09T14:11:52.887664728Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.689645ms
grafana | logger=migrator t=2024-04-09T14:11:52.892096372Z level=info msg="Executing migration" id="Drop old annotation table v4"
grafana | logger=migrator t=2024-04-09T14:11:52.892270474Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=174.292µs
grafana | logger=migrator t=2024-04-09T14:11:52.898474374Z level=info msg="Executing migration" id="create annotation table v5"
grafana | logger=migrator t=2024-04-09T14:11:52.899905005Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.430211ms
grafana | logger=migrator t=2024-04-09T14:11:52.902952999Z level=info msg="Executing migration" id="add index annotation 0 v3"
grafana | logger=migrator t=2024-04-09T14:11:52.90443635Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.483261ms
grafana | logger=migrator t=2024-04-09T14:11:52.90786061Z level=info msg="Executing migration" id="add index annotation 1 v3"
grafana | logger=migrator t=2024-04-09T14:11:52.90927637Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.41662ms
grafana | logger=migrator t=2024-04-09T14:11:52.914425785Z level=info msg="Executing migration" id="add index annotation 2 v3"
grafana | logger=migrator t=2024-04-09T14:11:52.915304328Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=878.683µs
grafana | logger=migrator t=2024-04-09T14:11:52.918586295Z level=info msg="Executing migration" id="add index annotation 3 v3"
grafana | logger=migrator t=2024-04-09T14:11:52.91959432Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.007575ms
grafana | logger=migrator t=2024-04-09T14:11:52.954962742Z level=info msg="Executing migration" id="add index annotation 4 v3"
grafana | logger=migrator t=2024-04-09T14:11:52.956603425Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.640313ms
grafana | logger=migrator t=2024-04-09T14:11:52.962233147Z level=info msg="Executing migration" id="Update annotation table charset"
grafana | logger=migrator t=2024-04-09T14:11:52.962259367Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=27.06µs
grafana | logger=migrator t=2024-04-09T14:11:52.965422483Z level=info msg="Executing migration" id="Add column region_id to annotation table"
grafana | logger=migrator t=2024-04-09T14:11:52.971535472Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.111238ms
grafana | logger=migrator t=2024-04-09T14:11:52.975231285Z level=info msg="Executing migration" id="Drop category_id index"
grafana | logger=migrator t=2024-04-09T14:11:52.975795113Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=565.818µs
grafana | logger=migrator t=2024-04-09T14:11:52.981675588Z level=info msg="Executing migration" id="Add column tags to annotation table"
grafana | logger=migrator t=2024-04-09T14:11:52.985465333Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.786675ms
grafana | logger=migrator t=2024-04-09T14:11:52.990286863Z level=info msg="Executing migration" id="Create annotation_tag table v2"
grafana | logger=migrator t=2024-04-09T14:11:52.991357098Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.069395ms
grafana | logger=migrator t=2024-04-09T14:11:52.995043822Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
grafana | logger=migrator t=2024-04-09T14:11:52.995961985Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=918.103µs
grafana | logger=migrator t=2024-04-09T14:11:53.00183317Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
grafana | logger=migrator t=2024-04-09T14:11:53.002682382Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=849.462µs
grafana | logger=migrator t=2024-04-09T14:11:53.006288284Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
kafka | num.partitions = 1
kafka | num.recovery.threads.per.data.dir = 1
kafka | num.replica.alter.log.dirs.threads = null
kafka | num.replica.fetchers = 1
kafka | offset.metadata.max.bytes = 4096
kafka | offsets.commit.required.acks = -1
kafka | offsets.commit.timeout.ms = 5000
kafka | offsets.load.buffer.size = 5242880
kafka | offsets.retention.check.interval.ms = 600000
kafka | offsets.retention.minutes = 10080
kafka | offsets.topic.compression.codec = 0
kafka | offsets.topic.num.partitions = 50
kafka | offsets.topic.replication.factor = 1
kafka | offsets.topic.segment.bytes = 104857600
kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka | password.encoder.iterations = 4096
kafka | password.encoder.key.length = 128
kafka | password.encoder.keyfactory.algorithm = null
kafka | password.encoder.old.secret = null
kafka | password.encoder.secret = null
kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka | process.roles = []
kafka | producer.id.expiration.check.interval.ms = 600000
kafka | producer.id.expiration.ms = 86400000
kafka | producer.purgatory.purge.interval.requests = 1000
kafka | queued.max.request.bytes = -1
kafka | queued.max.requests = 500
kafka | quota.window.num = 11
kafka | quota.window.size.seconds = 1
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
kafka | remote.log.manager.task.interval.ms = 30000
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | remote.log.manager.task.retry.backoff.ms = 500
kafka | remote.log.manager.task.retry.jitter = 0.2
kafka | remote.log.manager.thread.pool.size = 10
kafka | remote.log.metadata.custom.metadata.max.bytes = 128
kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
kafka | remote.log.metadata.manager.class.path = null
kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
kafka | remote.log.metadata.manager.listener.name = null
kafka | remote.log.reader.max.pending.tasks = 100
kafka | remote.log.reader.threads = 10
kafka | remote.log.storage.manager.class.name = null
kafka | remote.log.storage.manager.class.path = null
kafka | remote.log.storage.manager.impl.prefix = rsm.config.
kafka | remote.log.storage.system.enable = false
kafka | replica.fetch.backoff.ms = 1000
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-db-migrator | --------------
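Each grafana migration above is logged twice: an "Executing migration" line, then a "Migration successfully executed" line carrying the elapsed duration. A minimal sketch of that execute-then-time-then-log pattern follows; the Migration interface and MigrationRunner are hypothetical illustrations, not Grafana's actual migrator API.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.List;

    // Hypothetical migration step: an id for logging plus the work itself.
    interface Migration {
        String id();
        void execute();
    }

    class MigrationRunner {
        void runAll(List<Migration> migrations) {
            for (Migration m : migrations) {
                System.out.printf("level=info msg=\"Executing migration\" id=\"%s\"%n", m.id());
                Instant start = Instant.now();
                m.execute(); // a real runner would abort the whole sequence on failure
                Duration d = Duration.between(start, Instant.now());
                System.out.printf("level=info msg=\"Migration successfully executed\" id=\"%s\" duration=%dµs%n",
                        m.id(), d.toNanos() / 1_000);
            }
        }
    }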
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
kafka | replica.fetch.max.bytes = 1048576
kafka | replica.fetch.min.bytes = 1
kafka | replica.fetch.response.max.bytes = 10485760
kafka | replica.fetch.wait.max.ms = 500
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
kafka | replica.lag.time.max.ms = 30000
kafka | replica.selector.class = null
kafka | replica.socket.receive.buffer.bytes = 65536
kafka | replica.socket.timeout.ms = 30000
kafka | replication.quota.window.num = 11
kafka | replication.quota.window.size.seconds = 1
kafka | request.timeout.ms = 30000
kafka | reserved.broker.max.id = 1000
kafka | sasl.client.callback.handler.class = null
kafka | sasl.enabled.mechanisms = [GSSAPI]
kafka | sasl.jaas.config = null
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | sasl.kerberos.min.time.before.relogin = 60000
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka | sasl.kerberos.service.name = null
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | sasl.login.callback.handler.class = null
kafka | sasl.login.class = null
kafka | sasl.login.connect.timeout.ms = null
kafka | sasl.login.read.timeout.ms = null
kafka | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.login.refresh.min.period.seconds = 60
kafka | sasl.login.refresh.window.factor = 0.8
kafka | sasl.login.refresh.window.jitter = 0.05
kafka | sasl.login.retry.backoff.max.ms = 10000
kafka | sasl.login.retry.backoff.ms = 100
kafka | sasl.mechanism.controller.protocol = GSSAPI
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka | sasl.oauthbearer.clock.skew.seconds = 30
kafka | sasl.oauthbearer.expected.audience = null
kafka | sasl.oauthbearer.expected.issuer = null
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | sasl.oauthbearer.jwks.endpoint.url = null
kafka | sasl.oauthbearer.scope.claim.name = scope
kafka | sasl.oauthbearer.sub.claim.name = sub
kafka | sasl.oauthbearer.token.endpoint.url = null
kafka | sasl.server.callback.handler.class = null
kafka | sasl.server.max.receive.size = 524288
kafka | security.inter.broker.protocol = PLAINTEXT
kafka | security.providers = null
kafka | server.max.startup.time.ms = 9223372036854775807
kafka | socket.connection.setup.timeout.max.ms = 30000
kafka | socket.connection.setup.timeout.ms = 10000
kafka | socket.listen.backlog.size = 50
kafka | socket.receive.buffer.bytes = 102400
kafka | socket.request.max.bytes = 104857600
kafka | socket.send.buffer.bytes = 102400
kafka | ssl.cipher.suites = []
kafka | ssl.client.auth = none
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | ssl.endpoint.identification.algorithm = https
kafka | ssl.engine.factory.class = null
kafka | ssl.key.password = null
kafka | ssl.keymanager.algorithm = SunX509
kafka | ssl.keystore.certificate.chain = null
kafka | ssl.keystore.key = null
kafka | ssl.keystore.location = null
kafka | ssl.keystore.password = null
kafka | ssl.keystore.type = JKS
kafka | ssl.principal.mapping.rules = DEFAULT
kafka | ssl.protocol = TLSv1.3
kafka | ssl.provider = null
kafka | ssl.secure.random.implementation = null
kafka | ssl.trustmanager.algorithm = PKIX
kafka | ssl.truststore.certificates = null
kafka | ssl.truststore.location = null
policy-db-migrator |
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0450-pdpgroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-apex-pdp | Waiting for mariadb port 3306...
policy-apex-pdp | mariadb (172.17.0.3:3306) open
policy-apex-pdp | Waiting for kafka port 9092...
policy-apex-pdp | kafka (172.17.0.9:9092) open
policy-apex-pdp | Waiting for pap port 6969...
policy-apex-pdp | pap (172.17.0.10:6969) open
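The three port waits above gate apex-pdp startup on mariadb, kafka, and pap. A minimal Java sketch of such a TCP poll loop follows; the actual container image does this in a shell script, and the host names here are simply the compose service names seen in the log.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    class WaitForPort {
        // Poll until a TCP connect to host:port succeeds, mirroring the
        // "Waiting for <svc> port ..." / "<svc> (...) open" lines above.
        static void await(String host, int port) throws InterruptedException {
            System.out.printf("Waiting for %s port %d...%n", host, port);
            while (true) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(host, port), 1000);
                    System.out.printf("%s (%s:%d) open%n", host, s.getInetAddress().getHostAddress(), port);
                    return;
                } catch (IOException notReadyYet) {
                    Thread.sleep(1000); // dependency not reachable yet; retry
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            await("mariadb", 3306);
            await("kafka", 9092);
            await("pap", 6969);
        }
    }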
policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
policy-apex-pdp | [2024-04-09T14:12:25.816+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
policy-apex-pdp | [2024-04-09T14:12:25.969+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | allow.auto.create.topics = true
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-apex-pdp | auto.include.jmx.reporter = true
policy-apex-pdp | auto.offset.reset = latest
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | check.crcs = true
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = consumer-5bf355d1-b191-4690-8ff2-dd6842394381-1
policy-apex-pdp | client.rack =
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | default.api.timeout.ms = 60000
policy-apex-pdp | enable.auto.commit = true
policy-apex-pdp | exclude.internal.topics = true
policy-apex-pdp | fetch.max.bytes = 52428800
policy-apex-pdp | fetch.max.wait.ms = 500
policy-apex-pdp | fetch.min.bytes = 1
policy-apex-pdp | group.id = 5bf355d1-b191-4690-8ff2-dd6842394381
policy-apex-pdp | group.instance.id = null
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-apex-pdp | interceptor.classes = []
grafana | logger=migrator t=2024-04-09T14:11:53.017637965Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.346971ms
grafana | logger=migrator t=2024-04-09T14:11:53.020517308Z level=info msg="Executing migration" id="Create annotation_tag table v3"
grafana | logger=migrator t=2024-04-09T14:11:53.021021228Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=503.95µs
grafana | logger=migrator t=2024-04-09T14:11:53.024864158Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
grafana | logger=migrator t=2024-04-09T14:11:53.02553384Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=669.272µs
grafana | logger=migrator t=2024-04-09T14:11:53.027961915Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
grafana | logger=migrator t=2024-04-09T14:11:53.028408973Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=446.788µs
grafana | logger=migrator t=2024-04-09T14:11:53.031263876Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
grafana | logger=migrator t=2024-04-09T14:11:53.032081051Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=817.985µs
grafana | logger=migrator t=2024-04-09T14:11:53.037014261Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
grafana | logger=migrator t=2024-04-09T14:11:53.037213955Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=197.424µs
grafana | logger=migrator t=2024-04-09T14:11:53.040690249Z level=info msg="Executing migration" id="Add created time to annotation table"
grafana | logger=migrator t=2024-04-09T14:11:53.044904676Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.213857ms
grafana | logger=migrator t=2024-04-09T14:11:53.047463153Z level=info msg="Executing migration" id="Add updated time to annotation table"
grafana | logger=migrator t=2024-04-09T14:11:53.051524478Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.058874ms
grafana | logger=migrator t=2024-04-09T14:11:53.055330077Z level=info msg="Executing migration" id="Add index for created in annotation table"
grafana | logger=migrator t=2024-04-09T14:11:53.056296255Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=965.888µs
grafana | logger=migrator t=2024-04-09T14:11:53.059170598Z level=info msg="Executing migration" id="Add index for updated in annotation table"
grafana | logger=migrator t=2024-04-09T14:11:53.060121845Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=951.077µs
grafana | logger=migrator t=2024-04-09T14:11:53.063044669Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
grafana | logger=migrator t=2024-04-09T14:11:53.063303254Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=258.585µs
grafana | logger=migrator t=2024-04-09T14:11:53.067043712Z level=info msg="Executing migration" id="Add epoch_end column"
grafana | logger=migrator t=2024-04-09T14:11:53.071183878Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.139846ms
grafana | logger=migrator t=2024-04-09T14:11:53.074473939Z level=info msg="Executing migration" id="Add index for epoch_end"
grafana | logger=migrator t=2024-04-09T14:11:53.07560677Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.136211ms
grafana | logger=migrator t=2024-04-09T14:11:53.078769708Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
grafana | logger=migrator t=2024-04-09T14:11:53.078974211Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=215.944µs
grafana | logger=migrator t=2024-04-09T14:11:53.08273372Z level=info msg="Executing migration" id="Move region to single row"
grafana | logger=migrator t=2024-04-09T14:11:53.083135538Z level=info msg="Migration successfully executed" id="Move region to single row" duration=401.748µs
grafana | logger=migrator t=2024-04-09T14:11:53.085927819Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
grafana | logger=migrator t=2024-04-09T14:11:53.086808345Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=880.646µs
grafana | logger=migrator t=2024-04-09T14:11:53.089611747Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
grafana | logger=migrator t=2024-04-09T14:11:53.090468462Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=857.015µs
grafana | logger=migrator t=2024-04-09T14:11:53.095678618Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2024-04-09T14:11:53.096611535Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=930.997µs
grafana | logger=migrator t=2024-04-09T14:11:53.099687182Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2024-04-09T14:11:53.100576598Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=888.916µs
grafana | logger=migrator t=2024-04-09T14:11:53.104284376Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
grafana | logger=migrator t=2024-04-09T14:11:53.105100911Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=816.925µs
grafana | logger=migrator t=2024-04-09T14:11:53.107913783Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
grafana | logger=migrator t=2024-04-09T14:11:53.108788459Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=874.236µs
grafana | logger=migrator t=2024-04-09T14:11:53.112279153Z level=info msg="Executing migration" id="Increase tags column to length 4096"
grafana | logger=migrator t=2024-04-09T14:11:53.112358854Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=78.561µs
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0470-pdp.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-db-migrator | --------------
kafka | ssl.truststore.password = null
kafka | ssl.truststore.type = JKS
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka | transaction.max.timeout.ms = 900000
kafka | transaction.partition.verification.enable = true
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka | transaction.state.log.load.buffer.size = 5242880
kafka | transaction.state.log.min.isr = 2
kafka | transaction.state.log.num.partitions = 50
kafka | transaction.state.log.replication.factor = 3
kafka | transaction.state.log.segment.bytes = 104857600
kafka | transactional.id.expiration.ms = 604800000
kafka | unclean.leader.election.enable = false
kafka | unstable.api.versions.enable = false
kafka | zookeeper.clientCnxnSocket = null
kafka | zookeeper.connect = zookeeper:2181
kafka | zookeeper.connection.timeout.ms = null
kafka | zookeeper.max.in.flight.requests = 10
kafka | zookeeper.metadata.migration.enable = false
kafka | zookeeper.session.timeout.ms = 18000
kafka | zookeeper.set.acl = false
kafka | zookeeper.ssl.cipher.suites = null
kafka | zookeeper.ssl.client.enable = false
kafka | zookeeper.ssl.crl.enable = false
kafka | zookeeper.ssl.enabled.protocols = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | zookeeper.ssl.keystore.location = null
kafka | zookeeper.ssl.keystore.password = null
kafka | zookeeper.ssl.keystore.type = null
kafka | zookeeper.ssl.ocsp.enable = false
kafka | zookeeper.ssl.protocol = TLSv1.2
kafka | zookeeper.ssl.truststore.location = null
kafka | zookeeper.ssl.truststore.password = null
kafka | zookeeper.ssl.truststore.type = null
kafka | (kafka.server.KafkaConfig)
kafka | [2024-04-09 14:11:58,582] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-04-09 14:11:58,583] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-04-09 14:11:58,584] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-04-09 14:11:58,586] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-04-09 14:11:58,612] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
kafka | [2024-04-09 14:11:58,616] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
kafka | [2024-04-09 14:11:58,625] INFO Loaded 0 logs in 13ms (kafka.log.LogManager)
kafka | [2024-04-09 14:11:58,627] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
policy-db-migrator | --------------
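The broker settings dumped above (the block closed by kafka.server.KafkaConfig) are the effective single-broker CSIT configuration, e.g. offsets.topic.replication.factor = 1. A sketch of reading those same effective values back through the Kafka Admin API follows; broker id 1 and the kafka:9092 bootstrap address come from this log, and the rest is the standard public client API.

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    class ShowBrokerConfig {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // Broker id 1, as logged during registration at /brokers/ids/1.
                ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
                Config config = admin.describeConfigs(List.of(broker)).all().get().get(broker);
                // Prints the same key = value pairs that appear in the dump above.
                config.entries().forEach(e -> System.out.println(e.name() + " = " + e.value()));
            }
        }
    }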
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0570-toscadatatype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0630-toscanodetype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-db-migrator | --------------
kafka | [2024-04-09 14:11:58,628] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka | [2024-04-09 14:11:58,637] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2024-04-09 14:11:58,681] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
kafka | [2024-04-09 14:11:58,708] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2024-04-09 14:11:58,720] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka | [2024-04-09 14:11:58,743] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-04-09 14:11:59,047] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2024-04-09 14:11:59,064] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka | [2024-04-09 14:11:59,064] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2024-04-09 14:11:59,070] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
kafka | [2024-04-09 14:11:59,074] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-04-09 14:11:59,094] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-04-09 14:11:59,096] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-04-09 14:11:59,099] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-04-09 14:11:59,099] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-04-09 14:11:59,100] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-04-09 14:11:59,112] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2024-04-09 14:11:59,112] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
kafka | [2024-04-09 14:11:59,133] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2024-04-09 14:11:59,157] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1712671919148,1712671919148,1,0,0,72057612383354881,258,0,27
kafka | (kafka.zk.KafkaZkClient)
kafka | [2024-04-09 14:11:59,158] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2024-04-09 14:11:59,284] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2024-04-09 14:11:59,290] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-04-09 14:11:59,297] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-04-09 14:11:59,298] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-04-09 14:11:59,301] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2024-04-09 14:11:59,312] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,315] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-09 14:11:59,318] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,320] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-09 14:11:59,322] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2024-04-09 14:11:59,352] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2024-04-09 14:11:59,355] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2024-04-09 14:11:59,355] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2024-04-09 14:11:59,355] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2024-04-09 14:11:59,356] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,364] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,369] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,373] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,389] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,389] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
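The kafka lines above are the classic ZooKeeper-mode startup sequence: broker 1 registers itself at /brokers/ids/1, wins the controller election with epoch 1, and brings up the group and transaction coordinators. A sketch of confirming that membership from any client follows; only the kafka:9092 bootstrap address is taken from the log, the rest is the standard Admin API.

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    class DescribeCluster {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                var result = admin.describeCluster();
                System.out.println("cluster id : " + result.clusterId().get());
                System.out.println("controller : " + result.controller().get()); // expect broker 1 here
                result.nodes().get().forEach(n -> System.out.println("broker     : " + n));
            }
        }
    }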
policy-api | Waiting for mariadb port 3306...
policy-api | mariadb (172.17.0.3:3306) open
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | policy-db-migrator (172.17.0.6:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api |
policy-api | . ____ _ __ _ _
policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
policy-api | =========|_|==============|___/=/_/_/_/
policy-api | :: Spring Boot :: (v3.1.8)
policy-api |
policy-api | [2024-04-09T14:12:01.980+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2024-04-09T14:12:01.982+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-api | [2024-04-09T14:12:03.667+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2024-04-09T14:12:03.761+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 84 ms. Found 6 JPA repository interfaces.
policy-api | [2024-04-09T14:12:04.184+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
policy-api | [2024-04-09T14:12:04.184+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
policy-api | [2024-04-09T14:12:04.805+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-api | [2024-04-09T14:12:04.815+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-api | [2024-04-09T14:12:04.817+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2024-04-09T14:12:04.817+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
policy-api | [2024-04-09T14:12:04.903+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-api | [2024-04-09T14:12:04.903+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2859 ms
policy-api | [2024-04-09T14:12:05.325+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-api | [2024-04-09T14:12:05.395+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
policy-api | [2024-04-09T14:12:05.398+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
policy-api | [2024-04-09T14:12:05.442+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-api | [2024-04-09T14:12:05.800+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-api | [2024-04-09T14:12:05.819+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2024-04-09T14:12:05.914+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@63b3ee82
policy-api | [2024-04-09T14:12:05.916+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
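At this point Tomcat is listening on 6969 (http) under context path /policy/api/v1 and the Hikari pool has connected to mariadb. A sketch of an HTTP readiness probe against the service follows; the /healthcheck path, the policy-api host name, and whether the endpoint requires authentication are assumptions about the image, not facts shown in this log.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    class ApiProbe {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    // Assumed health path under the /policy/api/v1 context logged above.
                    .uri(URI.create("http://policy-api:6969/policy/api/v1/healthcheck"))
                    .GET()
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }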
policy-api | [2024-04-09T14:12:07.772+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2024-04-09T14:12:07.776+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2024-04-09T14:12:08.764+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-apex-pdp | internal.leave.group.on.close = true
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | isolation.level = read_uncommitted
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | max.partition.fetch.bytes = 1048576
policy-apex-pdp | max.poll.interval.ms = 300000
policy-apex-pdp | max.poll.records = 500
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-apex-pdp | receive.buffer.bytes = 65536
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-apex-pdp | security.providers = null
policy-apex-pdp | send.buffer.bytes = 131072
policy-apex-pdp | session.timeout.ms = 45000
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | ssl.cipher.suites = null
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-04-09T14:12:26.112+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-04-09T14:12:26.112+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-04-09T14:12:26.112+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671946111 policy-apex-pdp | [2024-04-09T14:12:26.114+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-1, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-04-09T14:12:26.125+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-04-09T14:12:26.126+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2024-04-09T14:12:26.129+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5bf355d1-b191-4690-8ff2-dd6842394381, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2024-04-09T14:12:26.154+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 5bf355d1-b191-4690-8ff2-dd6842394381 policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = 
true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | 
ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-04-09T14:12:26.161+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-04-09T14:12:26.161+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-04-09T14:12:26.161+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671946161 policy-apex-pdp | [2024-04-09T14:12:26.161+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-04-09T14:12:26.162+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=be2ac700-46f7-4847-9bf9-d74c80869d4f, alive=false, publisher=null]]: starting policy-apex-pdp | [2024-04-09T14:12:26.172+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | buffer.memory = 33554432 policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.type = none policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | delivery.timeout.ms = 120000 policy-apex-pdp | enable.idempotence = true policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-apex-pdp | max.request.size = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-apex-pdp | partitioner.class = null policy-apex-pdp | partitioner.ignore.keys = false policy-apex-pdp | receive.buffer.bytes = 32768 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retries = 2147483647 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | 
sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | overriding logback.xml simulator | 2024-04-09 14:11:46,035 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | 2024-04-09 14:11:46,092 INFO org.onap.policy.models.simulators starting simulator | 2024-04-09 14:11:46,093 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties simulator | 2024-04-09 14:11:46,271 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION simulator | 2024-04-09 14:11:46,272 INFO org.onap.policy.models.simulators starting A&AI simulator simulator | 2024-04-09 14:11:46,375 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-04-09 14:11:46,387 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-04-09 14:11:46,389 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-04-09 14:11:46,395 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 simulator | 2024-04-09 14:11:46,449 INFO Session workerName=node0 simulator | 2024-04-09 14:11:46,976 INFO Using GSON for REST calls simulator | 2024-04-09 14:11:47,069 INFO Started o.e.j.s.ServletContextHandler@b6b1987{/,null,AVAILABLE} simulator | 2024-04-09 14:11:47,080 INFO Started A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} simulator | 2024-04-09 14:11:47,091 INFO Started Server@2a2c13a8{STARTING}[11.0.20,sto=0] @1535ms simulator | 2024-04-09 14:11:47,091 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,AVAILABLE}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4298 ms. 
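[editor's note] Each JettyJerseyServer entry above is the toString() of ONAP's JettyServletServer wrapper: an embedded Jetty 11 server with a Jersey ServletContainer mounted at /*, here on 0.0.0.0:6666 for the A&AI simulator. A rough sketch of that embedding, with a hypothetical resource class standing in for the simulator's real JAX-RS endpoints:

    import jakarta.ws.rs.GET;
    import jakarta.ws.rs.Path;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;
    import org.glassfish.jersey.server.ResourceConfig;
    import org.glassfish.jersey.servlet.ServletContainer;

    public class SimulatorServerSketch {
        // Hypothetical endpoint; the simulator registers its own resources.
        @Path("healthcheck")
        public static class HealthResource {
            @GET
            public String ok() { return "OK"; }
        }

        public static void main(String[] args) throws Exception {
            Server server = new Server(6666);             // A&AI simulator port from the log
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");                  // contextPath=/ in the toString()
            ServletHolder jersey = new ServletHolder(new ServletContainer(
                    new ResourceConfig(HealthResource.class)));
            context.addServlet(jersey, "/*");             // servlets={/*=...ServletContainer...}
            server.setHandler(context);
            server.start();
            server.join();
        }
    }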
simulator | 2024-04-09 14:11:47,100 INFO org.onap.policy.models.simulators starting SDNC simulator simulator | 2024-04-09 14:11:47,103 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-04-09 14:11:47,107 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-04-09 14:11:47,107 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-04-09 14:11:47,108 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 simulator | 2024-04-09 14:11:47,117 INFO Session workerName=node0 simulator | 2024-04-09 14:11:47,171 INFO Using GSON for REST calls simulator | 2024-04-09 14:11:47,182 INFO Started o.e.j.s.ServletContextHandler@6941827a{/,null,AVAILABLE} simulator | 2024-04-09 14:11:47,184 INFO Started SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} simulator | 2024-04-09 14:11:47,184 INFO Started Server@62452cc9{STARTING}[11.0.20,sto=0] @1628ms policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites 
= null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | transaction.timeout.ms = 60000 policy-apex-pdp | transactional.id = null policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | policy-apex-pdp | [2024-04-09T14:12:26.180+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-apex-pdp | [2024-04-09T14:12:26.192+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-04-09T14:12:26.192+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-04-09T14:12:26.192+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671946192 policy-apex-pdp | [2024-04-09T14:12:26.193+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=be2ac700-46f7-4847-9bf9-d74c80869d4f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-apex-pdp | [2024-04-09T14:12:26.193+00:00|INFO|ServiceManager|main] service manager starting set alive policy-apex-pdp | [2024-04-09T14:12:26.193+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-apex-pdp | [2024-04-09T14:12:26.195+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-apex-pdp | [2024-04-09T14:12:26.195+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-apex-pdp | [2024-04-09T14:12:26.196+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-apex-pdp | [2024-04-09T14:12:26.196+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-apex-pdp | [2024-04-09T14:12:26.196+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-apex-pdp | [2024-04-09T14:12:26.196+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5bf355d1-b191-4690-8ff2-dd6842394381, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866 policy-apex-pdp | 
[2024-04-09T14:12:26.196+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5bf355d1-b191-4690-8ff2-dd6842394381, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-apex-pdp | [2024-04-09T14:12:26.197+00:00|INFO|ServiceManager|main] service manager starting Create REST server policy-apex-pdp | [2024-04-09T14:12:26.211+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: policy-apex-pdp | [] policy-apex-pdp | [2024-04-09T14:12:26.214+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a613696f-9b67-4851-908a-282ce03d5805","timestampMs":1712671946198,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-04-09T14:12:26.339+00:00|INFO|ServiceManager|main] service manager starting Rest Server policy-apex-pdp | [2024-04-09T14:12:26.339+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-04-09T14:12:26.340+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-apex-pdp | [2024-04-09T14:12:26.340+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-04-09T14:12:26.349+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-04-09T14:12:26.349+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-04-09T14:12:26.349+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
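[editor's note] The ProducerConfig block and the first [OUT|KAFKA|policy-pdp-pap] heartbeat above show the PDP publishing its PDP_STATUS as a JSON string through an idempotent producer (acks = -1, enable.idempotence = true). A minimal sketch of that publish path, with the payload trimmed to a few of the logged fields:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpHeartbeatSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ACKS_CONFIG, "all");              // logged as acks = -1
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // "Instantiated an idempotent producer"
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            String heartbeat = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                    + "\"messageName\":\"PDP_STATUS\",\"pdpGroup\":\"defaultGroup\"}";
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Timer-0 in the log fires this publish on the heartbeat interval.
                producer.send(new ProducerRecord<>("policy-pdp-pap", heartbeat));
                producer.flush();
            }
        }
    }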
kafka | [2024-04-09 14:11:59,396] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) kafka | [2024-04-09 14:11:59,406] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) kafka | [2024-04-09 14:11:59,416] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2024-04-09 14:11:59,416] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) simulator | 2024-04-09 14:11:47,184 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,AVAILABLE}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4923 ms. simulator | 2024-04-09 14:11:47,185 INFO org.onap.policy.models.simulators starting SO simulator simulator | 2024-04-09 14:11:47,188 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-04-09 14:11:47,188 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-04-09 14:11:47,190 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, 
contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-04-09 14:11:47,190 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 simulator | 2024-04-09 14:11:47,197 INFO Session workerName=node0 simulator | 2024-04-09 14:11:47,252 INFO Using GSON for REST calls simulator | 2024-04-09 14:11:47,264 INFO Started o.e.j.s.ServletContextHandler@5e81e5ac{/,null,AVAILABLE} simulator | 2024-04-09 14:11:47,266 INFO Started SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} simulator | 2024-04-09 14:11:47,266 INFO Started Server@488eb7f2{STARTING}[11.0.20,sto=0] @1710ms simulator | 2024-04-09 14:11:47,266 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,AVAILABLE}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4923 ms. 
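[editor's note] "Using GSON for REST calls" in the simulator startup above means request and response bodies are serialized with Gson rather than Jackson. A small sketch of that round trip for a hypothetical response shape:

    import com.google.gson.Gson;

    public class GsonSketch {
        // Hypothetical model; the simulators define their own response classes.
        static class SimResponse {
            String name;
            String status;
        }

        public static void main(String[] args) {
            Gson gson = new Gson();
            SimResponse r = new SimResponse();
            r.name = "SO simulator";
            r.status = "UP";
            String json = gson.toJson(r);                              // outbound REST body
            SimResponse back = gson.fromJson(json, SimResponse.class); // inbound REST body
            System.out.println(json + " -> " + back.status);
        }
    }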
simulator | 2024-04-09 14:11:47,267 INFO org.onap.policy.models.simulators starting VFC simulator simulator | 2024-04-09 14:11:47,269 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-04-09 14:11:47,269 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-04-09 14:11:47,277 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-04-09 14:11:47,279 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 simulator | 2024-04-09 14:11:47,282 INFO Session workerName=node0 simulator | 2024-04-09 14:11:47,322 INFO Using GSON for REST calls simulator | 2024-04-09 14:11:47,330 INFO Started o.e.j.s.ServletContextHandler@320de594{/,null,AVAILABLE} simulator | 2024-04-09 14:11:47,331 INFO Started VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} simulator | 2024-04-09 14:11:47,332 INFO Started Server@6035b93b{STARTING}[11.0.20,sto=0] @1776ms simulator | 2024-04-09 14:11:47,332 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, 
user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,AVAILABLE}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4938 ms. simulator | 2024-04-09 14:11:47,333 INFO org.onap.policy.models.simulators started policy-api | [2024-04-09T14:12:09.594+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2024-04-09T14:12:10.815+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2024-04-09T14:12:11.036+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@5c1348c6, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4f3eddc0, org.springframework.security.web.context.SecurityContextHolderFilter@69cf9acb, org.springframework.security.web.header.HeaderWriterFilter@62c4ad40, org.springframework.security.web.authentication.logout.LogoutFilter@dcaa0e8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3341ba8e, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@5f160f9c, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@234a08ea, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@729f8c5d, org.springframework.security.web.access.ExceptionTranslationFilter@4567dcbc, org.springframework.security.web.access.intercept.AuthorizationFilter@543d242e] policy-api | [2024-04-09T14:12:11.840+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2024-04-09T14:12:11.965+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2024-04-09T14:12:11.995+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2024-04-09T14:12:12.014+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.763 seconds (process running for 11.424) policy-api | [2024-04-09T14:12:28.402+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2024-04-09T14:12:28.402+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2024-04-09T14:12:28.403+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms policy-api | [2024-04-09T14:12:28.673+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: policy-api | [] policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- 
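[editor's note] The policy-db-migrator output in this stretch is a sequence of idempotent DDL statements (CREATE TABLE IF NOT EXISTS ...) replayed per upgrade script. A sketch of how one such statement can be applied over plain JDBC, reusing the toscaparameter DDL from the log (connection URL and credentials are hypothetical; the migrator container takes them from its environment):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MigratorStepSketch {
        public static void main(String[] args) throws Exception {
            String ddl = "CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, "
                    + "parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, "
                    + "parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, "
                    + "name VARCHAR(120) NULL, version VARCHAR(20) NULL, "
                    + "PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))";
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "changeit");
                 Statement stmt = conn.createStatement()) {
                // IF NOT EXISTS makes the step safe to re-run, which is why the scripts replay cleanly.
                stmt.execute(ddl);
            }
        }
    }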
policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, 
parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | -------------- policy-apex-pdp | [2024-04-09T14:12:26.350+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-04-09T14:12:26.491+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Cluster ID: TupwFhGQQjGmvCIddVeH4w policy-apex-pdp | [2024-04-09T14:12:26.491+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: TupwFhGQQjGmvCIddVeH4w policy-apex-pdp | [2024-04-09T14:12:26.492+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-apex-pdp | [2024-04-09T14:12:26.492+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-apex-pdp | [2024-04-09T14:12:26.499+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] 
(Re-)joining group policy-apex-pdp | [2024-04-09T14:12:26.532+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Request joining group due to: need to re-join with the given member-id: consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2-780b758d-7817-467f-b505-47072bd7ea3f policy-apex-pdp | [2024-04-09T14:12:26.532+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-apex-pdp | [2024-04-09T14:12:26.532+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] (Re-)joining group policy-apex-pdp | [2024-04-09T14:12:26.966+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-apex-pdp | [2024-04-09T14:12:26.966+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-apex-pdp | [2024-04-09T14:12:29.535+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Successfully joined group with generation Generation{generationId=1, memberId='consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2-780b758d-7817-467f-b505-47072bd7ea3f', protocol='range'} policy-apex-pdp | [2024-04-09T14:12:29.543+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Finished assignment for group at generation 1: {consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2-780b758d-7817-467f-b505-47072bd7ea3f=Assignment(partitions=[policy-pdp-pap-0])} policy-apex-pdp | [2024-04-09T14:12:29.550+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Successfully synced group in generation Generation{generationId=1, memberId='consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2-780b758d-7817-467f-b505-47072bd7ea3f', protocol='range'} policy-apex-pdp | [2024-04-09T14:12:29.550+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-apex-pdp | [2024-04-09T14:12:29.552+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Adding newly assigned partitions: policy-pdp-pap-0 policy-apex-pdp | [2024-04-09T14:12:29.558+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Found no committed offset for partition policy-pdp-pap-0 policy-apex-pdp | [2024-04-09T14:12:29.567+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Resetting offset for 
partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-apex-pdp | [2024-04-09T14:12:46.197+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f71a83e0-8991-48b5-bf16-0f80efc2e25f","timestampMs":1712671966197,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-04-09T14:12:46.220+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f71a83e0-8991-48b5-bf16-0f80efc2e25f","timestampMs":1712671966197,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-04-09T14:12:46.223+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-04-09T14:12:46.371+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f35c2eaa-9447-4409-bc81-28e3583921e3","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-04-09T14:12:46.381+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-apex-pdp | [2024-04-09T14:12:46.382+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"58bba064-d794-42f0-bfa3-6b19bdabadb3","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-04-09T14:12:46.383+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f35c2eaa-9447-4409-bc81-28e3583921e3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"20e7f5d3-bd9e-4c55-ba54-348ec7aba681","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-04-09T14:12:46.391+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"58bba064-d794-42f0-bfa3-6b19bdabadb3","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-04-09T14:12:46.391+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-04-09T14:12:46.391+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-04-09T14:11:53.116259596Z level=info msg="Executing migration" id="create test_data table" grafana | 
logger=migrator t=2024-04-09T14:11:53.117110942Z level=info msg="Migration successfully executed" id="create test_data table" duration=851.316µs grafana | logger=migrator t=2024-04-09T14:11:53.122040072Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2024-04-09T14:11:53.122922448Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=882.036µs grafana | logger=migrator t=2024-04-09T14:11:53.125893623Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2024-04-09T14:11:53.12681224Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=918.507µs grafana | logger=migrator t=2024-04-09T14:11:53.129777384Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2024-04-09T14:11:53.130675731Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=897.856µs grafana | logger=migrator t=2024-04-09T14:11:53.135111932Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2024-04-09T14:11:53.135303076Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=190.943µs grafana | logger=migrator t=2024-04-09T14:11:53.13829338Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2024-04-09T14:11:53.138697178Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=402.978µs grafana | logger=migrator t=2024-04-09T14:11:53.142469257Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2024-04-09T14:11:53.142555339Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=86.832µs grafana | logger=migrator t=2024-04-09T14:11:53.146576822Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2024-04-09T14:11:53.147408888Z level=info msg="Migration successfully executed" id="create team table" duration=833.026µs grafana | logger=migrator t=2024-04-09T14:11:53.150327711Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2024-04-09T14:11:53.151574224Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.245273ms grafana | logger=migrator t=2024-04-09T14:11:53.154508438Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2024-04-09T14:11:53.155560737Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.051639ms grafana | logger=migrator t=2024-04-09T14:11:53.159239385Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2024-04-09T14:11:53.164039293Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.797558ms grafana | logger=migrator t=2024-04-09T14:11:53.167208051Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2024-04-09T14:11:53.167423945Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=216.414µs 
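[editor's note] The grafana migrator lines follow a fixed pattern: log "Executing migration", run the step, then log "Migration successfully executed" with a microsecond duration. A minimal sketch of that execute-and-time pattern (grafana itself is Go; this Java Runnable merely stands in for the real migration step):

    public class MigrationTimingSketch {
        static void runMigration(String id, Runnable step) {
            System.out.println("level=info msg=\"Executing migration\" id=\"" + id + "\"");
            long start = System.nanoTime();
            step.run();
            double micros = (System.nanoTime() - start) / 1_000.0;
            System.out.printf(
                    "level=info msg=\"Migration successfully executed\" id=\"%s\" duration=%.3fµs%n",
                    id, micros);
        }

        public static void main(String[] args) {
            // Hypothetical no-op step; grafana's real steps execute DDL against its store.
            runMigration("create team table", () -> { });
        }
    }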
grafana | logger=migrator t=2024-04-09T14:11:53.171269786Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2024-04-09T14:11:53.172287144Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.017308ms grafana | logger=migrator t=2024-04-09T14:11:53.175668187Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2024-04-09T14:11:53.176465091Z level=info msg="Migration successfully executed" id="create team member table" duration=796.945µs grafana | logger=migrator t=2024-04-09T14:11:53.180281911Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2024-04-09T14:11:53.181272189Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=990.218µs grafana | logger=migrator t=2024-04-09T14:11:53.184090791Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2024-04-09T14:11:53.185088969Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=997.938µs grafana | logger=migrator t=2024-04-09T14:11:53.187954962Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2024-04-09T14:11:53.188990941Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.033349ms grafana | logger=migrator t=2024-04-09T14:11:53.192879783Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2024-04-09T14:11:53.197554628Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.674766ms grafana | logger=migrator t=2024-04-09T14:11:53.200563884Z level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2024-04-09T14:11:53.205187998Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.623674ms grafana | logger=migrator t=2024-04-09T14:11:53.231995953Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2024-04-09T14:11:53.237890931Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.891288ms grafana | logger=migrator t=2024-04-09T14:11:53.241907695Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2024-04-09T14:11:53.242918124Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.006349ms grafana | logger=migrator t=2024-04-09T14:11:53.245834508Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana | logger=migrator t=2024-04-09T14:11:53.246837626Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.003499ms grafana | logger=migrator t=2024-04-09T14:11:53.250737508Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2024-04-09T14:11:53.251832888Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.09469ms grafana | logger=migrator t=2024-04-09T14:11:53.256491474Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator 
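The grafana migrator lines above repeat one pattern: announce a schema change with msg="Executing migration", apply it once, then confirm with msg="Migration successfully executed" and the elapsed time. A minimal sketch of that execute-and-time loop in Python (the migration registry here is hypothetical, not Grafana's actual code):

    import logging
    import time

    logging.basicConfig(level=logging.INFO, format="logger=migrator %(message)s")
    log = logging.getLogger("migrator")

    # Hypothetical registry: (id, apply) pairs run once, in order.
    migrations = [
        ("create team table", lambda: None),
        ("add index team.org_id", lambda: None),
    ]

    for mig_id, apply_fn in migrations:
        log.info('msg="Executing migration" id="%s"', mig_id)
        start = time.perf_counter()
        apply_fn()  # a real runner would execute the schema change here
        elapsed_us = (time.perf_counter() - start) * 1e6
        log.info('msg="Migration successfully executed" id="%s" duration=%.3fµs',
                 mig_id, elapsed_us)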
grafana | logger=migrator t=2024-04-09T14:11:53.260514828Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
grafana | logger=migrator t=2024-04-09T14:11:53.261627279Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.111711ms
grafana | logger=migrator t=2024-04-09T14:11:53.266169043Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
grafana | logger=migrator t=2024-04-09T14:11:53.267616989Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.447156ms
grafana | logger=migrator t=2024-04-09T14:11:53.270884889Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
grafana | logger=migrator t=2024-04-09T14:11:53.271838417Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=954.208µs
grafana | logger=migrator t=2024-04-09T14:11:53.275384352Z level=info msg="Executing migration" id="add index dashboard_permission"
grafana | logger=migrator t=2024-04-09T14:11:53.276398951Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.011889ms
grafana | logger=migrator t=2024-04-09T14:11:53.280328084Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
grafana | logger=migrator t=2024-04-09T14:11:53.280875984Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=543.74µs
grafana | logger=migrator t=2024-04-09T14:11:53.2833945Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f35c2eaa-9447-4409-bc81-28e3583921e3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"20e7f5d3-bd9e-4c55-ba54-348ec7aba681","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-04-09T14:12:46.391+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2024-04-09T14:12:46.412+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-04-09T14:12:46.414+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d74c0d11-396a-45ee-aca8-c28faceef757","timestampMs":1712671966414,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-04-09T14:12:46.422+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d74c0d11-396a-45ee-aca8-c28faceef757","timestampMs":1712671966414,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-04-09T14:12:46.423+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2024-04-09T14:12:46.464+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","timestampMs":1712671966430,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-04-09T14:12:46.466+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"b2ae2d05-b707-414a-b0f7-ae1aec005c8b","timestampMs":1712671966465,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-04-09T14:12:46.479+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"b2ae2d05-b707-414a-b0f7-ae1aec005c8b","timestampMs":1712671966465,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-04-09T14:12:46.480+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2024-04-09T14:12:56.154+00:00|INFO|RequestLog|qtp1068445309-33] 172.17.0.5 - policyadmin [09/Apr/2024:14:12:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.51.1"
policy-apex-pdp | [2024-04-09T14:13:56.080+00:00|INFO|RequestLog|qtp1068445309-28] 172.17.0.5 - policyadmin [09/Apr/2024:14:13:56 +0000] "GET /metrics HTTP/1.1" 200 10654 "-" "Prometheus/2.51.1"
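The policy-apex-pdp exchange above is the PAP-to-PDP handshake on the policy-pdp-pap topic: PAP publishes PDP_STATE_CHANGE and PDP_UPDATE requests, the apex PDP answers each with a PDP_STATUS response, and because it consumes the same topic it publishes to, it discards its own PDP_STATUS echoes ("discarding event of type PDP_STATUS"). A sketch of that dispatch rule, assuming only the JSON fields visible in the log:

    import json

    # PDP_STATE_CHANGE request copied from the log above.
    raw = ('{"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","state":"ACTIVE",'
           '"messageName":"PDP_STATE_CHANGE","requestId":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b",'
           '"timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b",'
           '"pdpGroup":"defaultGroup","pdpSubgroup":"apex"}')

    def handle(message):
        """Dispatch on messageName, as the MessageTypeDispatcher lines suggest."""
        msg = json.loads(message)
        if msg["messageName"] == "PDP_STATUS":
            return None  # our own response echoed back from the topic: discard
        return {  # answer PDP_UPDATE / PDP_STATE_CHANGE with a PDP_STATUS
            "pdpType": "apex",
            "state": msg.get("state", "PASSIVE"),
            "healthy": "HEALTHY",
            "response": {"responseTo": msg["requestId"], "responseStatus": "SUCCESS"},
            "messageName": "PDP_STATUS",
        }

    print(handle(raw)["response"]["responseTo"])  # cb095d6b-1806-48f9-af91-a9c5f08d2e3b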
kafka | [2024-04-09 14:11:59,416] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,417] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,417] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,420] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,421] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,421] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,422] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2024-04-09 14:11:59,423] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2024-04-09 14:11:59,424] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,429] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2024-04-09 14:11:59,435] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
kafka | [2024-04-09 14:11:59,439] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-04-09 14:11:59,439] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-04-09 14:11:59,441] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
kafka | [2024-04-09 14:11:59,444] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
kafka | [2024-04-09 14:11:59,444] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-04-09 14:11:59,444] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-04-09 14:11:59,445] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
kafka | [2024-04-09 14:11:59,445] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
kafka | [2024-04-09 14:11:59,447] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka | [2024-04-09 14:11:59,452] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,454] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
kafka | [2024-04-09 14:11:59,461] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.9:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
kafka | [2024-04-09 14:11:59,466] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2024-04-09 14:11:59,466] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed.
kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298)
kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251)
kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)
kafka | [2024-04-09 14:11:59,468] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient)
kafka | [2024-04-09 14:11:59,466] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2024-04-09 14:11:59,468] INFO Kafka startTimeMs: 1712671919454 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2024-04-09 14:11:59,469] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,469] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,470] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,471] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,471] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
kafka | [2024-04-09 14:11:59,472] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,483] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
kafka | [2024-04-09 14:11:59,572] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka | [2024-04-09 14:11:59,641] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-04-09 14:11:59,654] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-04-09 14:11:59,677] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-04-09 14:12:04,484] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2024-04-09 14:12:04,484] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2024-04-09 14:12:24,800] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
kafka | [2024-04-09 14:12:24,803] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
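policy-pdp-pap is created on first use with broker defaults, and __consumer_offsets follows with the compacted-topic settings printed below. For comparison, an explicit creation of the same two topics with kafka-python's admin client (an assumption for illustration; the broker here creates them itself via AdminZkClient):

    from kafka.admin import KafkaAdminClient, NewTopic

    admin = KafkaAdminClient(bootstrap_servers="kafka:9092")
    admin.create_topics(new_topics=[
        # Single partition, single replica, broker defaults, like policy-pdp-pap.
        NewTopic(name="policy-pdp-pap", num_partitions=1, replication_factor=1),
        # Compacted offsets topic with the configuration printed in the log.
        NewTopic(name="__consumer_offsets", num_partitions=50, replication_factor=1,
                 topic_configs={"compression.type": "producer",
                                "cleanup.policy": "compact",
                                "segment.bytes": "104857600"}),
    ])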
kafka | [2024-04-09 14:12:24,805] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2024-04-09 14:12:24,807] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
kafka | [2024-04-09 14:12:24,832] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(ITmYpZ6rSK-iF5o_1J2T3Q),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(JIxyITR5QGSmI5P2pGX22A),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
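Every one of the 50 __consumer_offsets partitions is assigned to broker 1 because this is a single-broker cluster. Each consumer group's committed offsets live in exactly one of those partitions, selected by hashing the group id; a sketch of that mapping (50 is Kafka's offsets.topic.num.partitions default, and the hash is Java's String.hashCode reimplemented here):

    def offsets_partition(group_id, num_partitions=50):
        """Partition of __consumer_offsets that holds a group's commits."""
        h = 0
        for ch in group_id:
            h = (31 * h + ord(ch)) & 0xFFFFFFFF  # Java String.hashCode, 32-bit wrap
        return (h & 0x7FFFFFFF) % num_partitions  # mask the sign bit, then modulo

    # Every group maps to a fixed partition, so its offsets have a single owner.
    print(offsets_partition("policy-pap"))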
kafka | [2024-04-09 14:12:24,834] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0770-toscarequirement.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
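The policy-db-migrator lines follow one idempotent pattern: numbered scripts (0770-..., 0780-..., and so on) applied in filename order, each statement guarded with CREATE TABLE IF NOT EXISTS so a rerun is harmless. A minimal sketch of such a file-ordered runner (paths are hypothetical and sqlite3 stands in for the MariaDB connection the real migrator uses):

    import pathlib
    import sqlite3

    conn = sqlite3.connect(":memory:")  # stand-in for the MariaDB connection
    for script in sorted(pathlib.Path("sql").glob("*.sql")):  # 0770-..., 0780-..., in order
        print(f"> upgrade {script.name}")
        conn.executescript(script.read_text())  # IF NOT EXISTS keeps reruns harmless
    conn.commit()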
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.283684385Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=287.545µs
grafana | logger=migrator t=2024-04-09T14:11:53.287550777Z level=info msg="Executing migration" id="create tag table"
grafana | logger=migrator t=2024-04-09T14:11:53.288513555Z level=info msg="Migration successfully executed" id="create tag table" duration=963.907µs
grafana | logger=migrator t=2024-04-09T14:11:53.291371297Z level=info msg="Executing migration" id="add index tag.key_value"
grafana | logger=migrator t=2024-04-09T14:11:53.292335125Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=963.218µs
grafana | logger=migrator t=2024-04-09T14:11:53.295167017Z level=info msg="Executing migration" id="create login attempt table"
grafana | logger=migrator t=2024-04-09T14:11:53.295911911Z level=info msg="Migration successfully executed" id="create login attempt table" duration=744.394µs
grafana | logger=migrator t=2024-04-09T14:11:53.299060029Z level=info msg="Executing migration" id="add index login_attempt.username"
grafana | logger=migrator t=2024-04-09T14:11:53.299969376Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=908.047µs
grafana | logger=migrator t=2024-04-09T14:11:53.303678484Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
grafana | logger=migrator t=2024-04-09T14:11:53.30456272Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=883.886µs
grafana | logger=migrator t=2024-04-09T14:11:53.307398043Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
grafana | logger=migrator t=2024-04-09T14:11:53.322627933Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=15.228181ms
grafana | logger=migrator t=2024-04-09T14:11:53.325567188Z level=info msg="Executing migration" id="create login_attempt v2"
grafana | logger=migrator t=2024-04-09T14:11:53.3262129Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=646.382µs
grafana | logger=migrator t=2024-04-09T14:11:53.329662453Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
grafana | logger=migrator t=2024-04-09T14:11:53.330353196Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=690.113µs
grafana | logger=migrator t=2024-04-09T14:11:53.333118027Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
grafana | logger=migrator t=2024-04-09T14:11:53.333584375Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=464.698µs
grafana | logger=migrator t=2024-04-09T14:11:53.366974861Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
grafana | logger=migrator t=2024-04-09T14:11:53.368047081Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.07518ms
grafana | logger=migrator t=2024-04-09T14:11:53.372563924Z level=info msg="Executing migration" id="create user auth table"
grafana | logger=migrator t=2024-04-09T14:11:53.373782227Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.217313ms
grafana | logger=migrator t=2024-04-09T14:11:53.376895874Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
grafana | logger=migrator t=2024-04-09T14:11:53.378377571Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.480847ms
grafana | logger=migrator t=2024-04-09T14:11:53.38156122Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
grafana | logger=migrator t=2024-04-09T14:11:53.381630961Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=70.591µs
grafana | logger=migrator t=2024-04-09T14:11:53.385714207Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
grafana | logger=migrator t=2024-04-09T14:11:53.390928493Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.213465ms
grafana | logger=migrator t=2024-04-09T14:11:53.393748295Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
grafana | logger=migrator t=2024-04-09T14:11:53.398761077Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.012232ms
grafana | logger=migrator t=2024-04-09T14:11:53.401529378Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
grafana | logger=migrator t=2024-04-09T14:11:53.406558181Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.028293ms
grafana | logger=migrator t=2024-04-09T14:11:53.411741546Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
grafana | logger=migrator t=2024-04-09T14:11:53.4167971Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.055324ms
grafana | logger=migrator t=2024-04-09T14:11:53.419885357Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
grafana | logger=migrator t=2024-04-09T14:11:53.420837174Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=951.147µs
grafana | logger=migrator t=2024-04-09T14:11:53.423625056Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
grafana | logger=migrator t=2024-04-09T14:11:53.428786561Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.158975ms
grafana | logger=migrator t=2024-04-09T14:11:53.432679713Z level=info msg="Executing migration" id="create server_lock table"
grafana | logger=migrator t=2024-04-09T14:11:53.433458527Z level=info msg="Migration successfully executed" id="create server_lock table" duration=778.405µs
grafana | logger=migrator t=2024-04-09T14:11:53.436468352Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-04-09T14:11:49.767Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d
grafana | logger=migrator t=2024-04-09T14:11:53.437348458Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=879.746µs
policy-pap | Waiting for mariadb port 3306...
prometheus | ts=2024-04-09T14:11:49.767Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.1, branch=HEAD, revision=855b5ac4b80956874eb1790a04c92327f2f99e38)"
policy-db-migrator | > upgrade 0820-toscatrigger.sql
kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.440448856Z level=info msg="Executing migration" id="create user auth token table"
policy-pap | mariadb (172.17.0.3:3306) open
prometheus | ts=2024-04-09T14:11:49.767Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@d3785d7783f2, date=20240328-09:27:30, tags=netgo,builtinassets,stringlabels)"
policy-db-migrator | --------------
kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.441284521Z level=info msg="Migration successfully executed" id="create user auth token table" duration=833.765µs
policy-pap | Waiting for kafka port 9092...
prometheus | ts=2024-04-09T14:11:49.767Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.446327954Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
policy-pap | kafka (172.17.0.9:9092) open
prometheus | ts=2024-04-09T14:11:49.767Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
policy-db-migrator | --------------
kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.447222291Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=894.147µs
policy-pap | Waiting for api port 6969...
prometheus | ts=2024-04-09T14:11:49.767Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
policy-db-migrator |
kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.450137634Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
policy-pap | api (172.17.0.7:6969) open
prometheus | ts=2024-04-09T14:11:49.769Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
prometheus | ts=2024-04-09T14:11:49.771Z caller=main.go:1129 level=info msg="Starting TSDB ..."
grafana | logger=migrator t=2024-04-09T14:11:53.451057631Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=918.487µs
policy-db-migrator |
kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
prometheus | ts=2024-04-09T14:11:49.773Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
grafana | logger=migrator t=2024-04-09T14:11:53.454042836Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | --------------
policy-pap |
prometheus | ts=2024-04-09T14:11:49.773Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
grafana | logger=migrator t=2024-04-09T14:11:53.455174567Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.129651ms
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
policy-db-migrator | --------------
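Interleaved with the other containers, policy-pap is blocking until each dependency answers on its port: mariadb:3306, then kafka:9092, then the api on 6969, exactly as the "Waiting for ... port" lines report. A minimal sketch of that readiness loop (host names from the log, timeout illustrative):

    import socket
    import time

    def wait_for_port(host, port, timeout_s=120.0):
        print(f"Waiting for {host} port {port}...")
        deadline = time.monotonic() + timeout_s
        while True:
            try:
                with socket.create_connection((host, port), timeout=2.0):
                    print(f"{host} ({port}) open")
                    return
            except OSError:
                if time.monotonic() >= deadline:
                    raise TimeoutError(f"{host}:{port} not reachable")
                time.sleep(1.0)  # retry until the container is listening

    for host, port in [("mariadb", 3306), ("kafka", 9092), ("api", 6969)]:
        wait_for_port(host, port)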
policy-pap | .   ____          _            __ _ _
policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-pap |  =========|_|==============|___/=/_/_/_/
prometheus | ts=2024-04-09T14:11:49.778Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
grafana | logger=migrator t=2024-04-09T14:11:53.459093779Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
policy-db-migrator |
policy-db-migrator |
prometheus | ts=2024-04-09T14:11:49.778Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.081µs
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:53.468207157Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=9.113548ms
prometheus | ts=2024-04-09T14:11:49.778Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:53.471534739Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
prometheus | ts=2024-04-09T14:11:49.779Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
policy-db-migrator |
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:53.472490326Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=955.177µs
prometheus | ts=2024-04-09T14:11:49.779Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=193.564µs wal_replay_duration=420.558µs wbl_replay_duration=210ns total_replay_duration=665.933µs
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:53.475506112Z level=info msg="Executing migration" id="create cache_data table"
prometheus | ts=2024-04-09T14:11:49.783Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:53.476380058Z level=info msg="Migration successfully executed" id="create cache_data table" duration=873.806µs
policy-pap | :: Spring Boot ::                (v3.1.8)
prometheus | ts=2024-04-09T14:11:49.783Z caller=main.go:1153 level=info msg="TSDB started"
policy-db-migrator |
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:53.480126527Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
policy-pap |
prometheus | ts=2024-04-09T14:11:49.783Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:53.481021214Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=896.207µs
policy-pap | [2024-04-09T14:12:14.862+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 32 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
prometheus | ts=2024-04-09T14:11:49.785Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.263803ms db_storage=1.67µs remote_storage=1.71µs web_handler=760ns query_engine=960ns scrape=327.796µs scrape_sd=152.603µs notify=123.292µs notify_sd=11.15µs rules=2.2µs tracing=5.19µs
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:53.484131811Z level=info msg="Executing migration" id="create short_url table v1"
policy-pap | [2024-04-09T14:12:14.863+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
prometheus | ts=2024-04-09T14:11:49.785Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
policy-db-migrator |
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:53.485010177Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=878.286µs
policy-pap | [2024-04-09T14:12:16.710+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
prometheus | ts=2024-04-09T14:11:49.785Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:53.487994622Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
policy-pap | [2024-04-09T14:12:16.830+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 108 ms. Found 7 JPA repository interfaces.
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.48894981Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=954.928µs
policy-pap | [2024-04-09T14:12:17.212+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.49327668Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
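With "Server is ready to receive web requests", Prometheus begins scraping its targets; the RequestLog entries from policy-apex-pdp earlier show those GET /metrics scrapes arriving about once a minute. A sketch of fetching and parsing one such exposition-format payload (the URL is illustrative and the prometheus_client package is assumed installed):

    import urllib.request

    from prometheus_client.parser import text_string_to_metric_families

    # Illustrative target; Prometheus itself resolves targets from prometheus.yml.
    body = urllib.request.urlopen("http://localhost:6969/metrics").read().decode()

    for family in text_string_to_metric_families(body):
        for sample in family.samples:
            print(sample.name, sample.labels, sample.value)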
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:53.493343311Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=67.382µs policy-pap | [2024-04-09T14:12:17.990+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:53.496222354Z level=info msg="Executing migration" id="delete alert_definition table" policy-pap | [2024-04-09T14:12:18.001+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:53.496304905Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=82.911µs policy-pap | [2024-04-09T14:12:18.003+00:00|INFO|StandardService|main] Starting service [Tomcat] kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:53.498874783Z level=info msg="Executing migration" id="recreate alert_definition table" policy-pap | [2024-04-09T14:12:18.003+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:53.500187167Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.311694ms policy-pap | [2024-04-09T14:12:18.112+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica 
(state.change.logger) kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:53.504911714Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" policy-pap | [2024-04-09T14:12:18.112+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3155 ms kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:53.505886632Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=973.168µs policy-pap | [2024-04-09T14:12:18.552+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:53.508754275Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" policy-pap | [2024-04-09T14:12:18.641+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:53.510078089Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.322534ms policy-pap | [2024-04-09T14:12:18.644+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:53.513740517Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" policy-pap | [2024-04-09T14:12:18.684+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to 
NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:53.513840489Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=100.902µs policy-pap | [2024-04-09T14:12:19.037+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:53.517391264Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" policy-pap | [2024-04-09T14:12:19.056+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:53.518613607Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.223213ms policy-pap | [2024-04-09T14:12:19.176+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2def7a7a kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:53.522472088Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" policy-pap | [2024-04-09T14:12:19.178+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
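[Editor's note] The HikariPool-1 lines above show policy-pap bringing up its MariaDB connection pool before JPA initialization finishes. Below is a minimal Java sketch of the equivalent HikariCP setup; the JDBC URL and credentials are hypothetical stand-ins, since the real values come from the PAP configuration rather than this log.

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class PapPoolSketch {
        public static void main(String[] args) {
            HikariConfig cfg = new HikariConfig();
            // Hypothetical endpoint and credentials; not taken from this log.
            cfg.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin");
            cfg.setUsername("policy_user");
            cfg.setPassword("policy_user");

            // Constructing the datasource starts the pool, which is what produces
            // the "HikariPool-1 - Starting..." / "Start completed." messages above.
            try (HikariDataSource ds = new HikariDataSource(cfg)) {
                System.out.println("pool up: " + ds.getPoolName());
            }
        }
    }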
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.523374044Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=901.296µs
policy-pap | [2024-04-09T14:12:21.136+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.526541763Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
policy-pap | [2024-04-09T14:12:21.140+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.527552542Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.010579ms
policy-pap | [2024-04-09T14:12:21.664+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.532155026Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
policy-pap | [2024-04-09T14:12:22.093+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.533175205Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.020099ms
policy-pap | [2024-04-09T14:12:22.203+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.536865443Z level=info msg="Executing migration" id="Add column paused in alert_definition"
policy-pap | [2024-04-09T14:12:22.477+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.543803611Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.935418ms
policy-pap | allow.auto.create.topics = true
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.54698781Z level=info msg="Executing migration" id="drop alert_definition table"
policy-pap | auto.commit.interval.ms = 5000
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.547714613Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=725.713µs
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-04-09 14:12:24,844] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.552448101Z level=info msg="Executing migration" id="delete alert_definition_version table"
policy-pap | auto.offset.reset = latest
kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.552514172Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=66.412µs
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.57683601Z level=info msg="Executing migration" id="recreate alert_definition_version table"
policy-pap | check.crcs = true
kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.577647875Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=811.765µs
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.580409866Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
policy-pap | client.id = consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-1
kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.581137279Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=727.413µs
policy-pap | client.rack =
kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.584727056Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
policy-pap | connections.max.idle.ms = 540000
kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:53.586283344Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.555288ms
policy-db-migrator | --------------
policy-pap | default.api.timeout.ms = 60000
grafana | logger=migrator t=2024-04-09T14:11:53.591793696Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | enable.auto.commit = true
grafana | logger=migrator t=2024-04-09T14:11:53.591876017Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=82.321µs
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | exclude.internal.topics = true
grafana | logger=migrator t=2024-04-09T14:11:53.594882343Z level=info msg="Executing migration" id="drop alert_definition_version table"
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | fetch.max.bytes = 52428800
grafana | logger=migrator t=2024-04-09T14:11:53.595848061Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=964.318µs
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-04-09T14:11:53.598744234Z level=info msg="Executing migration" id="create alert_instance table"
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-pap | fetch.min.bytes = 1
grafana | logger=migrator t=2024-04-09T14:11:53.599822974Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.07729ms
policy-db-migrator |
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-pap | group.id = 8886bf5a-38da-4c7c-af7d-ca09814a22ad
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
grafana | logger=migrator t=2024-04-09T14:11:53.60450561Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
policy-pap | group.instance.id = null
policy-db-migrator | --------------
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:53.605720063Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.214453ms
policy-pap | heartbeat.interval.ms = 3000
policy-db-migrator |
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
grafana | logger=migrator t=2024-04-09T14:11:53.608748939Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
policy-pap | interceptor.classes = []
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
grafana | logger=migrator t=2024-04-09T14:11:53.610235966Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.487027ms
policy-pap | internal.leave.group.on.close = true
policy-db-migrator | --------------
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:53.613394364Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-db-migrator |
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
grafana | logger=migrator t=2024-04-09T14:11:53.62186636Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=8.471216ms
policy-pap | isolation.level = read_uncommitted
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
grafana | logger=migrator t=2024-04-09T14:11:53.627997673Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | --------------
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:53.629108364Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.112701ms
policy-pap | max.partition.fetch.bytes = 1048576
policy-db-migrator |
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-pap | max.poll.interval.ms = 300000
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
grafana | logger=migrator t=2024-04-09T14:11:53.63326092Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
policy-pap | max.poll.records = 500
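[Editor's note] Each "> upgrade NNNN-*.sql" block in the policy-db-migrator output is one numbered step of the schema upgrade: the script name is announced, the statement is printed between "--------------" separators, and the next step follows. Below is a minimal JDBC sketch of applying one such step in Java; the connection URL and credentials are assumptions (the real migrator takes them from its own configuration), and the statement is copied from the 0910 step logged above.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MigratorStepSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical URL and credentials; not taken from this log.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_user");
                 Statement stmt = conn.createStatement()) {
                // The 0910 step exactly as logged; note that "nodeTemplatessVersion"
                // (double "s") is the actual column name in the schema.
                stmt.execute("CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName "
                    + "ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)");
            }
        }
    }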
policy-db-migrator | --------------
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:53.635336998Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=2.075478ms
policy-pap | metadata.max.age.ms = 300000
policy-db-migrator |
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
grafana | logger=migrator t=2024-04-09T14:11:53.644514008Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
policy-pap | metric.reporters = []
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
grafana | logger=migrator t=2024-04-09T14:11:53.673306009Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=28.78951ms
policy-pap | metrics.num.samples = 2
policy-db-migrator | --------------
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:53.678382022Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
policy-pap | metrics.recording.level = INFO
policy-db-migrator |
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
grafana | logger=migrator t=2024-04-09T14:11:53.715474286Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=37.073774ms
policy-pap | metrics.sample.window.ms = 30000
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
grafana | logger=migrator t=2024-04-09T14:11:53.736536484Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-db-migrator | --------------
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:53.73793991Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.403426ms
policy-pap | receive.buffer.bytes = 65536
policy-db-migrator |
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
grafana | logger=migrator t=2024-04-09T14:11:53.743747978Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
policy-pap | reconnect.backoff.max.ms = 1000
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
grafana | logger=migrator t=2024-04-09T14:11:53.744781996Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.034018ms
policy-pap | reconnect.backoff.ms = 50
policy-db-migrator | --------------
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:53.750364249Z level=info msg="Executing migration" id="add current_reason column related to current_state"
policy-pap | request.timeout.ms = 30000
policy-db-migrator |
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
grafana | logger=migrator t=2024-04-09T14:11:53.756432161Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.067932ms
policy-pap | retry.backoff.ms = 100
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-04-09T14:11:53.801790908Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
policy-pap | sasl.client.callback.handler.class = null
policy-db-migrator | --------------
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:53.809364147Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=7.576849ms
policy-pap | sasl.jaas.config = null
policy-db-migrator |
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
grafana | logger=migrator t=2024-04-09T14:11:53.814701765Z level=info msg="Executing migration" id="create alert_rule table"
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-04-09T14:11:53.815710684Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.008639ms
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator | --------------
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:53.818464625Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
policy-db-migrator |
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-04-09T14:11:53.819338651Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=874.286µs
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-04-09T14:11:53.823269784Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
policy-db-migrator | --------------
policy-db-migrator |
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-04-09T14:11:53.824032898Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=763.234µs
policy-db-migrator |
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-04-09T14:11:53.827460151Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
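[Editor's note] The "ConsumerConfig values:" dump that policy-pap began printing above (it continues, interleaved, through the rest of this section) is the standard banner the Kafka client logs when a consumer is constructed. Below is a minimal Java sketch reproducing a few of the logged settings; only bootstrap.servers, group.id, auto.offset.reset, and the deserializers are taken from the log, while the topic subscription is an assumption based on the policy-pdp-pap partition visible in the kafka lines.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            // The group id is generated per run; this value is the one from this log.
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "8886bf5a-38da-4c7c-af7d-ca09814a22ad");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            // Constructing the consumer is what triggers the "ConsumerConfig values:" dump.
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap")); // assumed topic
                consumer.poll(Duration.ofSeconds(1));          // first poll joins the group
            }
        }
    }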
policy-pap | sasl.login.class = null
grafana | logger=migrator t=2024-04-09T14:11:53.828729024Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.268873ms
policy-db-migrator | --------------
policy-db-migrator |
policy-pap | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-04-09T14:11:53.841899117Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
policy-db-migrator |
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-04-09T14:11:53.842004479Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=105.362µs
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-04-09T14:11:53.857390253Z level=info msg="Executing migration" id="add column for to alert_rule"
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-04-09T14:11:53.866208715Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.819602ms
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-04-09T14:11:53.870032716Z level=info msg="Executing migration" id="add column annotations to alert_rule"
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-04-09T14:11:53.876737739Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.702643ms
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-04-09T14:11:53.882488075Z level=info msg="Executing migration" id="add column labels to alert_rule"
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-pap | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-04-09T14:11:53.889917702Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=7.429107ms
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:53.893813754Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.mechanism = GSSAPI
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-04-09T14:11:53.894632569Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=818.535µs
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:53.932316433Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.oauthbearer.expected.audience = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:53.93376284Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.446127ms
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.012513111Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-04-09T14:11:54.02231238Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.797819ms
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-04-09T14:11:54.027098417Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-04-09T14:11:54.034728636Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=7.627669ms
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-04-09T14:11:54.043144039Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-pap | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-04-09T14:11:54.044173838Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.030219ms
policy-db-migrator |
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-04-09T14:11:54.050435742Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-04-09T14:11:54.058217504Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.780912ms
policy-db-migrator | --------------
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | security.providers = null
grafana | logger=migrator t=2024-04-09T14:11:54.060900233Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-04-09T14:11:54.065760181Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.860378ms
policy-db-migrator | --------------
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | session.timeout.ms = 45000
grafana | logger=migrator t=2024-04-09T14:11:54.068392419Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
policy-db-migrator |
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator |
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:54.06844131Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=49.381µs
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-pap | ssl.cipher.suites = null
kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:54.073302489Z level=info msg="Executing migration" id="create alert_rule_version table"
policy-db-migrator | --------------
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-04-09 14:12:24,987] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:54.074133124Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=830.675µs
policy-pap | ssl.endpoint.identification.algorithm = https
policy-db-migrator | --------------
kafka | [2024-04-09 14:12:24,987] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:54.077557437Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
policy-pap | ssl.engine.factory.class = null
policy-db-migrator |
kafka | [2024-04-09 14:12:24,987] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | ssl.key.password = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.078617426Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.059859ms
kafka | [2024-04-09 14:12:24,993] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
policy-pap | ssl.keymanager.algorithm = SunX509
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
grafana | logger=migrator t=2024-04-09T14:11:54.083448934Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
kafka | [2024-04-09 14:12:24,993] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
policy-pap | ssl.keystore.certificate.chain = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:54.084952272Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.503148ms
kafka | [2024-04-09 14:12:24,993] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
policy-pap | ssl.keystore.key = null
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-04-09T14:11:54.094396093Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
kafka | [2024-04-09 14:12:24,993] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
policy-pap | ssl.keystore.location = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:54.094532226Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=137.233µs
kafka | [2024-04-09 14:12:24,993] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
policy-pap | ssl.keystore.password = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.09804824Z level=info msg="Executing migration" id="add column for to alert_rule_version"
kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
policy-pap | ssl.keystore.type = JKS
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.104961876Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.913636ms
kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
policy-pap | ssl.protocol = TLSv1.3
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:54.111626878Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
policy-pap | ssl.provider = null
policy-db-migrator | --------------
kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:54.116739421Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=5.113233ms
policy-pap | ssl.secure.random.implementation = null
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-04-09T14:11:54.119930859Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:54.12605463Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.123061ms
kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
policy-pap | ssl.truststore.certificates = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.131160033Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
policy-pap | ssl.truststore.location = null
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.140652786Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=9.491103ms
kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
policy-pap | ssl.truststore.password = null
policy-db-migrator | > upgrade 0100-pdp.sql
grafana | logger=migrator t=2024-04-09T14:11:54.145126258Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
policy-pap | ssl.truststore.type = JKS
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:54.152404241Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=7.278583ms
kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1,
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY grafana | logger=migrator t=2024-04-09T14:11:54.156454655Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) policy-pap | policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.156528916Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=78.711µs kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) policy-pap | [2024-04-09T14:12:22.634+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.161632489Z level=info msg="Executing migration" id=create_alert_configuration_table kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) policy-pap | [2024-04-09T14:12:22.635+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.162449674Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=817.565µs kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) policy-pap | [2024-04-09T14:12:22.635+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671942633 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql grafana | logger=migrator t=2024-04-09T14:11:54.205446397Z level=info msg="Executing migration" id="Add column default in alert_configuration" kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) policy-pap | [2024-04-09T14:12:22.637+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-1, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.214625955Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=9.184798ms kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) policy-pap | [2024-04-09T14:12:22.638+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) grafana | logger=migrator t=2024-04-09T14:11:54.218526146Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) policy-pap | allow.auto.create.topics = true policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.219006785Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=481.099µs kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) policy-pap | auto.commit.interval.ms = 5000 policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.222978607Z level=info msg="Executing migration" id="add column org_id in alert_configuration" kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) policy-pap | auto.include.jmx.reporter = true policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.229161099Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.182302ms kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) policy-pap | auto.offset.reset = latest policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql grafana | logger=migrator t=2024-04-09T14:11:54.238600241Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.239927816Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.324925ms kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) policy-pap | check.crcs = true policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY grafana | logger=migrator t=2024-04-09T14:11:54.244376077Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.252716719Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=8.340292ms policy-pap | client.id = consumer-policy-pap-2 kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.257072098Z level=info msg="Executing migration" id=create_ngalert_configuration_table policy-pap | client.rack = kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for 
partition __consumer_offsets-23 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.25772058Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=648.712µs policy-pap | connections.max.idle.ms = 540000 kafka | [2024-04-09 14:12:24,996] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) policy-db-migrator | > upgrade 0130-pdpstatistics.sql grafana | logger=migrator t=2024-04-09T14:11:54.261710513Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" policy-pap | default.api.timeout.ms = 60000 kafka | [2024-04-09 14:12:24,996] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.262539218Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=828.705µs policy-pap | enable.auto.commit = true kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL grafana | logger=migrator t=2024-04-09T14:11:54.265442611Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" policy-pap | exclude.internal.topics = true kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.27199055Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.547699ms policy-pap | fetch.max.bytes = 52428800 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 
for partition __consumer_offsets-40 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.277793196Z level=info msg="Executing migration" id="create provenance_type table" policy-pap | fetch.max.wait.ms = 500 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.278628541Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=835.155µs policy-pap | fetch.min.bytes = 1 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql grafana | logger=migrator t=2024-04-09T14:11:54.284276174Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" policy-pap | group.id = policy-pap kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.286004316Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.727272ms policy-pap | group.instance.id = null kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num grafana | logger=migrator t=2024-04-09T14:11:54.290663301Z level=info msg="Executing migration" id="create alert_image table" policy-pap | heartbeat.interval.ms = 3000 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) policy-db-migrator | 
-------------- grafana | logger=migrator t=2024-04-09T14:11:54.291664169Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.000318ms policy-pap | interceptor.classes = [] kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.295130972Z level=info msg="Executing migration" id="add unique index on token to alert_image table" policy-pap | internal.leave.group.on.close = true kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.296248222Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.11706ms policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) grafana | logger=migrator t=2024-04-09T14:11:54.30051651Z level=info msg="Executing migration" id="support longer URLs in alert_image table" policy-pap | isolation.level = read_uncommitted kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.300608911Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=93.141µs policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.305599453Z level=info msg="Executing migration" 
id=create_alert_configuration_history_table policy-pap | max.partition.fetch.bytes = 1048576 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.306660702Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.06116ms policy-pap | max.poll.interval.ms = 300000 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) policy-db-migrator | > upgrade 0150-pdpstatistics.sql grafana | logger=migrator t=2024-04-09T14:11:54.311331897Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" policy-pap | max.poll.records = 500 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.312885605Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.553858ms kafka | [2024-04-09 14:12:24,998] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL grafana | logger=migrator t=2024-04-09T14:11:54.317675993Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" kafka | [2024-04-09 14:12:24,998] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) policy-pap | metric.reporters = [] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.318450607Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" kafka | [2024-04-09 14:12:24,998] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) policy-pap | metrics.num.samples = 2 policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.322299087Z 
level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" kafka | [2024-04-09 14:12:25,000] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) policy-pap | metrics.recording.level = INFO policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.322757745Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=459.458µs kafka | [2024-04-09 14:12:25,006] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql kafka | [2024-04-09 14:12:25,009] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] grafana | logger=migrator t=2024-04-09T14:11:54.326485573Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) policy-pap | receive.buffer.bytes = 65536 grafana | logger=migrator t=2024-04-09T14:11:54.327597603Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.11124ms policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-04-09T14:11:54.332022964Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-04-09T14:11:54.343356131Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=11.333637ms policy-db-migrator | kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:54.348443693Z level=info msg="Executing migration" id="create library_element table v1" policy-db-migrator | kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) policy-pap | request.timeout.ms = 30000 grafana | logger=migrator t=2024-04-09T14:11:54.349213797Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.289473ms policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql kafka | [2024-04-09 14:12:25,011] 
TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) policy-pap | retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-09T14:11:54.35266416Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-04-09T14:11:54.35374203Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.07778ms policy-db-migrator | UPDATE jpapdpstatistics_enginestats a kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.jaas.config = null grafana | logger=migrator t=2024-04-09T14:11:54.358954005Z level=info msg="Executing migration" id="create library_element_connection table v1" policy-db-migrator | JOIN pdpstatistics b kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-04-09T14:11:54.35978673Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=830.605µs policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-09T14:11:54.362750804Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" policy-db-migrator | SET a.id = b.id kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-04-09T14:11:54.363818523Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.067349ms policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-04-09T14:11:54.36691402Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" policy-db-migrator | kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-09T14:11:54.367965869Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.051259ms policy-db-migrator | kafka | [2024-04-09 
14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-04-09T14:11:54.371921291Z level=info msg="Executing migration" id="increase max description length to 2048" policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.class = null grafana | logger=migrator t=2024-04-09T14:11:54.372005983Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=83.362µs policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-04-09T14:11:54.375287233Z level=info msg="Executing migration" id="alter library_element model to mediumtext" policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-09T14:11:54.375376894Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=89.251µs policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-04-09T14:11:54.382130097Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" policy-db-migrator | kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-04-09T14:11:54.382466173Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=360.756µs policy-db-migrator | kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-09T14:11:54.389453901Z level=info msg="Executing migration" id="create data_keys table" policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-04-09T14:11:54.391279744Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.829243ms policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-09T14:11:54.395781606Z level=info msg="Executing migration" id="create secrets table" policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-09T14:11:54.396886686Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.10432ms policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-04-09T14:11:54.400084794Z level=info msg="Executing migration" id="rename data_keys name column to id" policy-db-migrator | kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-04-09T14:11:54.43333202Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=33.247456ms policy-db-migrator | kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-09T14:11:54.43662772Z level=info msg="Executing migration" id="add name column into data_keys" policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-04-09T14:11:54.442452547Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.822747ms policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-04-09T14:11:54.447402677Z level=info msg="Executing migration" id="copy data_keys id column values into name" policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-09T14:11:54.44754581Z level=info msg="Migration successfully executed" id="copy data_keys id 
column values into name" duration=143.422µs policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-09T14:11:54.450709077Z level=info msg="Executing migration" id="rename data_keys name column to label" policy-db-migrator | kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-04-09T14:11:54.484048125Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=33.340798ms policy-db-migrator | kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-04-09T14:11:54.486954388Z level=info msg="Executing migration" id="rename data_keys id column back to name" policy-db-migrator | > upgrade 0210-sequence.sql kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-04-09T14:11:54.515815904Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=28.860496ms policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-04-09T14:11:54.519173715Z level=info msg="Executing migration" id="create kv_store table v1" policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) policy-pap | security.protocol = PLAINTEXT policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.519842807Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=669.172µs policy-pap | security.providers = null kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.52382377Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" policy-pap | send.buffer.bytes = 131072 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.524864879Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.040488ms 
policy-pap | session.timeout.ms = 45000
kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | > upgrade 0220-sequence.sql
grafana | logger=migrator t=2024-04-09T14:11:54.529033984Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:54.529239448Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=205.464µs
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
grafana | logger=migrator t=2024-04-09T14:11:54.532219532Z level=info msg="Executing migration" id="create permission table"
policy-pap | ssl.cipher.suites = null
kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:54.533701099Z level=info msg="Migration successfully executed" id="create permission table" duration=1.481567ms
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.538864904Z level=info msg="Executing migration" id="add unique index permission.role_id"
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.540383151Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.518247ms
policy-pap | ssl.engine.factory.class = null
kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
grafana | logger=migrator t=2024-04-09T14:11:54.543986817Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
policy-pap | ssl.key.password = null
kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:54.544995135Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.008318ms
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
grafana | logger=migrator t=2024-04-09T14:11:54.548051941Z level=info msg="Executing migration" id="create role table"
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:54.54909552Z level=info msg="Migration successfully executed" id="create role table" duration=1.040479ms
policy-pap | ssl.keystore.key = null
kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.553029512Z level=info msg="Executing migration" id="add column display_name"
policy-pap | ssl.keystore.location = null
kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.560827744Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.797712ms
policy-pap | ssl.keystore.password = null
kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
grafana | logger=migrator t=2024-04-09T14:11:54.564077363Z level=info msg="Executing migration" id="add column group_name"
policy-pap | ssl.keystore.type = JKS
kafka | [2024-04-09 14:12:25,012] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:54.571251144Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.173781ms
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-04-09 14:12:25,017] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
grafana | logger=migrator t=2024-04-09T14:11:54.57705286Z level=info msg="Executing migration" id="add index role.org_id"
policy-pap | ssl.provider = null
kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:54.578007647Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=954.787µs
policy-pap | ssl.secure.random.implementation = null
kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.621886286Z level=info msg="Executing migration" id="add unique index role_org_id_name"
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.624231569Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=2.345283ms
policy-pap | ssl.truststore.certificates = null
kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0120-toscatrigger.sql
grafana | logger=migrator t=2024-04-09T14:11:54.629115878Z level=info msg="Executing migration" id="add index role_org_id_uid"
policy-pap | ssl.truststore.location = null
kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:54.630460153Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.344275ms
policy-pap | ssl.truststore.password = null
kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
grafana | logger=migrator t=2024-04-09T14:11:54.633802234Z level=info msg="Executing migration" id="create team role table"
policy-pap | ssl.truststore.type = JKS
kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:54.63471876Z level=info msg="Migration successfully executed" id="create team role table" duration=916.526µs
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.641126637Z level=info msg="Executing migration" id="add index team_role.org_id"
policy-pap |
kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.642272488Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.145851ms
policy-pap | [2024-04-09T14:12:22.643+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
grafana | logger=migrator t=2024-04-09T14:11:54.650515949Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
policy-pap | [2024-04-09T14:12:22.644+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:54.652383413Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.867464ms
policy-pap | [2024-04-09T14:12:22.644+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671942643
policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
grafana | logger=migrator t=2024-04-09T14:11:54.655817325Z level=info msg="Executing migration" id="add index team_role.team_id"
kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-09T14:12:22.644+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-db-migrator | --------------
grafana | logger=migrator t=2024-04-09T14:11:54.656911725Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.0944ms
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.660626672Z level=info msg="Executing migration" id="create user role table"
kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-09T14:12:22.985+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
policy-db-migrator |
grafana | logger=migrator t=2024-04-09T14:11:54.661512377Z level=info msg="Migration successfully executed" id="create user role table" duration=885.705µs
kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-04-09T14:12:23.119+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering.
Explicitly configure spring.jpa.open-in-view to disable this warning policy-db-migrator | > upgrade 0140-toscaparameter.sql grafana | logger=migrator t=2024-04-09T14:11:54.666997139Z level=info msg="Executing migration" id="add index user_role.org_id" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-09T14:12:23.353+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@53917c92, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1fa796a4, org.springframework.security.web.context.SecurityContextHolderFilter@1f013047, org.springframework.security.web.header.HeaderWriterFilter@ce0bbd5, org.springframework.security.web.authentication.logout.LogoutFilter@44c2e8a8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4fbbd98c, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@51566ce0, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@17e6d07b, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@68de8522, org.springframework.security.web.access.ExceptionTranslationFilter@1f7557fe, org.springframework.security.web.access.intercept.AuthorizationFilter@3879feec] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.668248321Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.253772ms kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-09T14:12:24.127+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-db-migrator | DROP TABLE IF EXISTS toscaparameter grafana | logger=migrator t=2024-04-09T14:11:54.671491461Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-09T14:12:24.226+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.672583101Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.09104ms kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-09T14:12:24.246+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.677759395Z level=info msg="Executing migration" id="add index user_role.user_id" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-09T14:12:24.268+00:00|INFO|ServiceManager|main] Policy PAP starting policy-db-migrator | policy-pap | [2024-04-09T14:12:24.268+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry grafana | logger=migrator t=2024-04-09T14:11:54.679863583Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=2.104548ms kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-pap | [2024-04-09T14:12:24.269+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters grafana | logger=migrator t=2024-04-09T14:11:54.685663639Z level=info msg="Executing migration" id="create builtin role table" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-09T14:12:24.270+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener grafana | logger=migrator t=2024-04-09T14:11:54.687066025Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.402256ms kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints policy-pap | [2024-04-09T14:12:24.270+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher grafana | logger=migrator t=2024-04-09T14:11:54.695985027Z level=info msg="Executing migration" id="add index builtin_role.role_id" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-09T14:12:24.270+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher grafana | logger=migrator t=2024-04-09T14:11:54.697468544Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.510938ms kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-pap | [2024-04-09T14:12:24.270+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher grafana | logger=migrator t=2024-04-09T14:11:54.701536628Z level=info msg="Executing migration" id="add index builtin_role.name" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-04-09T14:12:24.274+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=8886bf5a-38da-4c7c-af7d-ca09814a22ad, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3ff3275b grafana | logger=migrator t=2024-04-09T14:11:54.702899003Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.361605ms kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata policy-pap | [2024-04-09T14:12:24.284+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=8886bf5a-38da-4c7c-af7d-ca09814a22ad, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting grafana | 
logger=migrator t=2024-04-09T14:11:54.757776893Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-04-09T14:12:24.285+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: grafana | logger=migrator t=2024-04-09T14:11:54.768630081Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=10.856348ms policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | allow.auto.create.topics = true grafana | logger=migrator t=2024-04-09T14:11:54.773777885Z level=info msg="Executing migration" id="add index builtin_role.org_id" policy-db-migrator | kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | auto.commit.interval.ms = 5000 grafana | logger=migrator t=2024-04-09T14:11:54.774876785Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.09879ms policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-04-09T14:11:54.779944517Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" policy-db-migrator | DROP TABLE IF EXISTS toscaproperty kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | auto.offset.reset = latest grafana | logger=migrator t=2024-04-09T14:11:54.78117412Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.229053ms policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | bootstrap.servers = [kafka:9092] grafana | logger=migrator t=2024-04-09T14:11:54.784348987Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" policy-db-migrator | kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | check.crcs = true grafana | logger=migrator t=2024-04-09T14:11:54.785653471Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.303924ms policy-db-migrator | kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql grafana | logger=migrator t=2024-04-09T14:11:54.78997545Z level=info msg="Executing migration" id="add unique index role.uid" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | client.id = consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.79108953Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.11447ms kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | client.rack = policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY grafana | logger=migrator t=2024-04-09T14:11:54.794699886Z level=info msg="Executing migration" id="create seed assignment table" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.795552731Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=852.205µs kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | default.api.timeout.ms = 60000 policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.798487095Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | enable.auto.commit = true policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.799638366Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.150171ms kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | exclude.internal.topics = true policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) grafana | logger=migrator t=2024-04-09T14:11:54.803004037Z level=info msg="Executing migration" id="add column hidden to role table" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | fetch.max.bytes = 52428800 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.811464042Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.457645ms kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | fetch.max.wait.ms = 500 policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.815878332Z level=info msg="Executing migration" id="permission kind migration" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | fetch.min.bytes = 1 policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.824317176Z level=info msg="Migration successfully 
executed" id="permission kind migration" duration=8.440124ms kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | group.id = 8886bf5a-38da-4c7c-af7d-ca09814a22ad policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql grafana | logger=migrator t=2024-04-09T14:11:54.828671505Z level=info msg="Executing migration" id="permission attribute migration" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | group.instance.id = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.834247257Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.573212ms kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | heartbeat.interval.ms = 3000 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY grafana | logger=migrator t=2024-04-09T14:11:54.838467584Z level=info msg="Executing migration" id="permission identifier migration" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | interceptor.classes = [] policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.847774503Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=9.306569ms kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | internal.leave.group.on.close = true policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.852209214Z level=info msg="Executing migration" id="add permission identifier index" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.853786623Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.577049ms kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | isolation.level = read_uncommitted policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) grafana | logger=migrator t=2024-04-09T14:11:54.857950579Z level=info msg="Executing migration" id="add permission action scope role_id index" kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.859389235Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.437746ms kafka | [2024-04-09 14:12:25,050] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) policy-pap | max.partition.fetch.bytes = 1048576 policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.862746366Z level=info msg="Executing migration" id="remove permission role_id action scope index" kafka | [2024-04-09 14:12:25,050] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) policy-pap | max.poll.interval.ms = 300000 policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.863792365Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.045909ms kafka | [2024-04-09 14:12:25,050] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) policy-pap | max.poll.records = 500 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql grafana | logger=migrator t=2024-04-09T14:11:54.867696326Z level=info msg="Executing migration" id="create query_history table v1" kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.868939379Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.242493ms kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) policy-pap | metric.reporters = [] policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT grafana | logger=migrator t=2024-04-09T14:11:54.875918956Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) policy-pap | metrics.num.samples = 2 policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.877543836Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.62391ms kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.887442646Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" policy-pap | metrics.recording.level = INFO kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.887583169Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=144.283µs policy-pap | metrics.sample.window.ms = 30000 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) policy-db-migrator | > upgrade 0100-upgrade.sql grafana | logger=migrator t=2024-04-09T14:11:54.892445787Z level=info msg="Executing migration" id="rbac disabled migrator" policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.892509368Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=74.271µs policy-pap | receive.buffer.bytes = 65536 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) policy-db-migrator | select 'upgrade to 1100 completed' as msg policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.898251833Z level=info msg="Executing migration" 
id="teams permissions migration" policy-pap | reconnect.backoff.ms = 50 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.899003077Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=753.954µs policy-pap | request.timeout.ms = 30000 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) policy-db-migrator | msg grafana | logger=migrator t=2024-04-09T14:11:54.902732645Z level=info msg="Executing migration" id="dashboard permissions" policy-pap | retry.backoff.ms = 100 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) policy-db-migrator | upgrade to 1100 completed grafana | logger=migrator t=2024-04-09T14:11:54.903755724Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.027778ms policy-pap | sasl.client.callback.handler.class = null kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.908905587Z level=info msg="Executing migration" id="dashboard permissions uid scopes" policy-pap | sasl.jaas.config = null kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql grafana | logger=migrator t=2024-04-09T14:11:54.90960529Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=699.533µs policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-04-09T14:11:54.912993922Z level=info msg="Executing migration" id="drop managed folder create actions" policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME grafana | logger=migrator t=2024-04-09T14:11:54.913222926Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=229.334µs policy-pap | sasl.kerberos.service.name = null kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) policy-db-migrator | -------------- grafana | 
logger=migrator t=2024-04-09T14:11:54.918200827Z level=info msg="Executing migration" id="alerting notification permissions" policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-04-09T14:11:54.918682365Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=479.608µs kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) policy-db-migrator | policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-09T14:11:54.924609713Z level=info msg="Executing migration" id="create query_history_star table v1" kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-pap | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-04-09T14:11:54.925887667Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.278134ms kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.class = null grafana | logger=migrator t=2024-04-09T14:11:54.929313799Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-04-09T14:11:54.931149283Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.834714ms policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) policy-pap | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-09T14:11:54.935749537Z level=info msg="Executing migration" id="add column org_id in query_history_star" policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) policy-pap | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-04-09T14:11:54.944260862Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.510695ms policy-db-migrator | kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for 
partition __consumer_offsets-22 (state.change.logger) policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-04-09T14:11:54.949449526Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) policy-pap | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-09T14:11:54.949533618Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=83.162µs policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-04-09T14:11:54.952808257Z level=info msg="Executing migration" id="create correlation table v1" policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-04-09T14:11:54.953814856Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.006369ms policy-db-migrator | kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-04-09T14:11:54.958246906Z level=info msg="Executing migration" id="add index correlations.uid" policy-db-migrator | kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-04-09T14:11:54.959408238Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.158561ms policy-db-migrator | > upgrade 0120-audit_sequence.sql kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-04-09T14:11:54.971089301Z level=info msg="Executing migration" id="add index correlations.source_uid" policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-04-09T14:11:54.972512567Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.423706ms 
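The 0110-idx_tsidx1.sql step above swaps the old IDX_TSIDX1 index for IDXTSIDX1 over (timeStamp, name, version); since changing an index's column list cannot be done in place, the portable approach is drop-then-recreate. A sketch of that rebuild, assuming MariaDB; example_stats is an illustrative table name, while the column list mirrors the statement in the log:

    -- Drop the outdated index, then create the replacement composite index.
    -- Column order matters: leftmost columns serve prefix lookups.
    DROP INDEX IDX_EXAMPLE ON example_stats;
    CREATE INDEX IDXEXAMPLE1 ON example_stats (timeStamp, name, version);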
policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-04-09T14:11:54.976790945Z level=info msg="Executing migration" id="add correlation config column" kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:54.983789712Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.997687ms policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:54.987186594Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:54.988424066Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.236382ms policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:54.992933559Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:54.995566767Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.630658ms policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 
(state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:54.999359146Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:55.023215234Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=23.855908ms policy-pap | security.protocol = PLAINTEXT policy-db-migrator | > upgrade 0130-statistics_sequence.sql kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:55.036338757Z level=info msg="Executing migration" id="create correlation v2" policy-pap | security.providers = null policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:55.03866815Z level=info msg="Migration successfully executed" id="create correlation v2" duration=2.327053ms policy-pap | send.buffer.bytes = 131072 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:55.044055629Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" policy-pap | session.timeout.ms = 45000 policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:55.045075128Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.019699ms policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:55.048225496Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,052] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:55.049325287Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.099391ms 
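The audit_sequence and statistics_sequence steps create table-based id generators (the layout JPA uses for GenerationType.TABLE) and seed the SEQ_GEN counter from the highest id already stored, so newly generated ids continue past the existing rows instead of colliding with them. A sketch of the same create-and-seed pattern, assuming MariaDB; example_sequence and example_audit are illustrative names:

    -- Table-based id generator: one row per named sequence.
    CREATE TABLE IF NOT EXISTS example_sequence (
        SEQ_NAME  VARCHAR(50) NOT NULL,
        SEQ_COUNT DECIMAL(38) DEFAULT NULL,
        PRIMARY KEY (SEQ_NAME)
    );
    -- Seed the counter from the current maximum id (0 for an empty table)
    -- so future generated ids never collide with rows that already exist.
    INSERT INTO example_sequence (SEQ_NAME, SEQ_COUNT)
        VALUES ('SEQ_GEN', (SELECT IFNULL(MAX(id), 0) FROM example_audit));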
policy-pap | ssl.cipher.suites = null policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) kafka | [2024-04-09 14:12:25,052] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:55.053370151Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,053] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) grafana | logger=migrator t=2024-04-09T14:11:55.055518761Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=2.14867ms policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | kafka | [2024-04-09 14:12:25,053] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:55.058751381Z level=info msg="Executing migration" id="copy correlation v1 to v2" policy-pap | ssl.engine.factory.class = null policy-db-migrator | -------------- kafka | [2024-04-09 14:12:25,109] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-09T14:11:55.059025346Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=274.155µs policy-pap | ssl.key.password = null policy-db-migrator | TRUNCATE TABLE sequence kafka | [2024-04-09 14:12:25,128] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-09T14:11:55.064313064Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | -------------- kafka | [2024-04-09 
grafana | logger=migrator t=2024-04-09T14:11:55.065051637Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=735.863µs
policy-pap | ssl.keystore.certificate.chain = null
policy-db-migrator |
kafka | [2024-04-09 14:12:25,131] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-09T14:11:55.071529927Z level=info msg="Executing migration" id="add provisioning column"
policy-pap | ssl.keystore.key = null
policy-db-migrator |
kafka | [2024-04-09 14:12:25,132] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:55.079541925Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.010618ms
policy-pap | ssl.keystore.location = null
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
kafka | [2024-04-09 14:12:25,151] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-04-09T14:11:55.082902297Z level=info msg="Executing migration" id="create entity_events table"
policy-db-migrator | --------------
kafka | [2024-04-09 14:12:25,152] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | ssl.keystore.password = null
grafana | logger=migrator t=2024-04-09T14:11:55.083502088Z level=info msg="Migration successfully executed" id="create entity_events table" duration=599.931µs
policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
kafka | [2024-04-09 14:12:25,152] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
policy-pap | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-04-09T14:11:55.087264718Z level=info msg="Executing migration" id="create dashboard public config v1"
policy-db-migrator | --------------
kafka | [2024-04-09 14:12:25,152] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-04-09T14:11:55.088313617Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.047939ms
policy-db-migrator |
kafka | [2024-04-09 14:12:25,152] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
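The correlation v1-to-v2 steps logged above (rename to correlation_tmp_qwerty, create v2, copy rows, drop the tmp table) are the classic recreate-table migration that migrators fall back to when a schema change cannot be expressed as an ALTER. A rough JDBC sketch of the same four steps; the v2 column list is a hypothetical stand-in, since the log shows the steps but not the schema:

    import java.sql.Connection;
    import java.sql.Statement;

    public class RecreateCorrelationTable {
        // Column names/types below are assumptions for illustration only.
        static void migrate(Connection conn) throws Exception {
            try (Statement st = conn.createStatement()) {
                // 1. Park the old table under a throwaway name.
                st.execute("ALTER TABLE correlation RENAME TO correlation_tmp_qwerty");
                // 2. Create the v2 table with the new definition.
                st.execute("CREATE TABLE correlation (uid VARCHAR(40) NOT NULL, "
                         + "source_uid VARCHAR(40) NOT NULL, org_id BIGINT NOT NULL, "
                         + "label TEXT, description TEXT, config TEXT, "
                         + "PRIMARY KEY (uid, source_uid))");
                // 3. Copy the surviving columns across.
                st.execute("INSERT INTO correlation SELECT uid, source_uid, org_id, "
                         + "label, description, config FROM correlation_tmp_qwerty");
                // 4. Drop the parked copy, matching the "drop correlation_tmp_qwerty" step.
                st.execute("DROP TABLE correlation_tmp_qwerty");
            }
        }
    }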
policy-pap | ssl.provider = null
grafana | logger=migrator t=2024-04-09T14:11:55.09174274Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
policy-db-migrator | --------------
kafka | [2024-04-09 14:12:25,159] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-04-09T14:11:55.092169818Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
policy-db-migrator | DROP TABLE pdpstatistics
kafka | [2024-04-09 14:12:25,160] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-04-09T14:11:55.095333767Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
policy-db-migrator | --------------
kafka | [2024-04-09 14:12:25,160] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
policy-pap | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-04-09T14:11:55.095757585Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
policy-db-migrator |
kafka | [2024-04-09 14:12:25,160] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-09T14:11:55.101284787Z level=info msg="Executing migration" id="Drop old dashboard public config table"
policy-db-migrator |
kafka | [2024-04-09 14:12:25,160] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
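The two "Skipping migration: Already executed, but not recorded in migration log" warnings above show the Grafana migrator noticing that the index it is asked to drop is already gone, then skipping the work while still writing a bookkeeping record so the step is not retried. A hedged sketch of that guard, assuming a migration_log table with migration_id and success columns; the real Grafana schema may differ:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class MigrationGuard {
        // Hypothetical shape of the migrator's bookkeeping; column names are assumptions.
        static boolean alreadyRecorded(Connection conn, String migrationId) throws Exception {
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT 1 FROM migration_log WHERE migration_id = ?")) {
                ps.setString(1, migrationId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next();
                }
            }
        }

        // Record the step even when the actual DDL was skipped, so later runs stay idempotent.
        static void record(Connection conn, String migrationId, boolean success) throws Exception {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO migration_log (migration_id, success, timestamp) VALUES (?, ?, NOW())")) {
                ps.setString(1, migrationId);
                ps.setBoolean(2, success);
                ps.executeUpdate();
            }
        }
    }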
policy-pap | ssl.truststore.location = null
grafana | logger=migrator t=2024-04-09T14:11:55.102664812Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.382825ms
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
kafka | [2024-04-09 14:12:25,168] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | ssl.truststore.password = null
grafana | logger=migrator t=2024-04-09T14:11:55.106965772Z level=info msg="Executing migration" id="recreate dashboard public config v1"
policy-db-migrator | --------------
kafka | [2024-04-09 14:12:25,168] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-04-09T14:11:55.108199385Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.234543ms
policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
kafka | [2024-04-09 14:12:25,168] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-04-09T14:11:55.112494674Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
policy-db-migrator | --------------
kafka | [2024-04-09 14:12:25,168] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap |
grafana | logger=migrator t=2024-04-09T14:11:55.113723897Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.230143ms
policy-db-migrator |
policy-pap | [2024-04-09T14:12:24.291+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | [2024-04-09 14:12:25,168] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
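Each "Leader __consumer_offsets-N ... ISR [1]" entry above marks one partition of the 50-partition __consumer_offsets topic coming online with broker 1 as leader; on this single-broker CSIT deployment the ISR is always just [1]. A small AdminClient sketch that could confirm the same state after startup, pointed at the kafka:9092 bootstrap address seen in the consumer config; the check itself is an assumption about how one might verify this, not part of the job:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class OffsetsTopicCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                TopicDescription desc = admin.describeTopics(List.of("__consumer_offsets"))
                        .allTopicNames().get().get("__consumer_offsets");
                // Expect 50 partitions, each led by broker 1 with ISR [1] on a single-node setup.
                desc.partitions().forEach(p -> System.out.printf(
                        "partition %d leader=%s isr=%s%n", p.partition(), p.leader(), p.isr()));
            }
        }
    }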
grafana | logger=migrator t=2024-04-09T14:11:55.11769374Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
policy-db-migrator |
policy-pap | [2024-04-09T14:12:24.291+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | [2024-04-09 14:12:25,174] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-04-09T14:11:55.119104566Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.411076ms
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
policy-pap | [2024-04-09T14:12:24.291+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671944291
kafka | [2024-04-09 14:12:25,174] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-09T14:11:55.123166581Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
policy-db-migrator | --------------
policy-pap | [2024-04-09T14:12:24.291+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Subscribed to topic(s): policy-pdp-pap
kafka | [2024-04-09 14:12:25,175] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-09T14:11:55.124763001Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.59537ms
policy-db-migrator | DROP TABLE statistics_sequence
policy-pap | [2024-04-09T14:12:24.292+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
kafka | [2024-04-09 14:12:25,175] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-09T14:11:55.12958059Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
policy-db-migrator | --------------
policy-pap | [2024-04-09T14:12:24.292+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=6736d7e9-6714-4f8e-b97c-2edf4d38cb1b, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2ea0161f
kafka | [2024-04-09 14:12:25,175] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:55.1312513Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.6673ms
policy-pap | [2024-04-09T14:12:24.292+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=6736d7e9-6714-4f8e-b97c-2edf4d38cb1b, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | [2024-04-09 14:12:25,181] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator |
kafka | [2024-04-09 14:12:25,181] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-09T14:12:24.292+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
grafana | logger=migrator t=2024-04-09T14:11:55.135380577Z level=info msg="Executing migration" id="Drop public config table"
policy-db-migrator | policyadmin: OK: upgrade (1300)
kafka | [2024-04-09 14:12:25,181] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
policy-pap | allow.auto.create.topics = true
policy-db-migrator | name version
kafka | [2024-04-09 14:12:25,181] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-09T14:11:55.136264193Z level=info msg="Migration successfully executed" id="Drop public config table" duration=883.916µs
policy-pap | auto.commit.interval.ms = 5000
policy-db-migrator | policyadmin 1300
kafka | [2024-04-09 14:12:25,182] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
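Above, PAP's SingleThreadedKafkaTopicSource registers a MessageTypeDispatcher against the policy-heartbeat topic (effective topic policy-pdp-pap) and starts its fetch loop with fetchTimeout=15000. Stripped of the ONAP plumbing, that loop amounts to roughly the following sketch; the dispatch method is a stand-in for the real listener interface, not the actual policy/common implementation:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class HeartbeatSourceSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // servers=[kafka:9092] in the log
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");          // consumerGroup=policy-pap
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // policy-heartbeat maps onto the effective topic policy-pdp-pap.
                consumer.subscribe(List.of("policy-pdp-pap"));
                while (true) {
                    for (ConsumerRecord<String, String> rec :
                            consumer.poll(Duration.ofMillis(15000))) { // fetchTimeout=15000
                        dispatch(rec.value());
                    }
                }
            }
        }

        // Stand-in for MessageTypeDispatcher: route each payload to its registered listener.
        static void dispatch(String json) {
            System.out.println("received: " + json);
        }
    }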
grafana | logger=migrator t=2024-04-09T14:11:55.140498541Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
policy-pap | auto.include.jmx.reporter = true
policy-db-migrator | ID script operation from_version to_version tag success atTime
kafka | [2024-04-09 14:12:25,192] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-04-09T14:11:55.14207118Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.573979ms
policy-pap | auto.offset.reset = latest
policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,193] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-09T14:11:55.148069581Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
policy-pap | bootstrap.servers = [kafka:9092]
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,193] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-09T14:11:55.150247011Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=2.17847ms
policy-pap | check.crcs = true
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,193] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-09T14:11:55.154730654Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
policy-pap | client.dns.lookup = use_all_dns_ips
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,194] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
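After "policyadmin: OK: upgrade (1300)" the db-migrator prints its audit listing (ID, script, operation, from_version, to_version, tag, success, atTime), which continues through the rest of this section. If the same view is needed after the fact, it should be queryable from the migration schema; a hedged JDBC sketch, where the table name policyadmin_schema_changelog is purely an assumption since the log shows only the column headers:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class MigrationAudit {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/migration", "policy_user", "policy_user");
                 Statement st = conn.createStatement();
                 // Hypothetical table name; the log prints these columns but not their source table.
                 ResultSet rs = st.executeQuery(
                     "SELECT ID, script, operation, from_version, to_version, tag, success, atTime "
                   + "FROM policyadmin_schema_changelog ORDER BY ID")) {
                while (rs.next()) {
                    System.out.printf("%d %s %s success=%d at %s%n",
                            rs.getInt("ID"), rs.getString("script"),
                            rs.getString("operation"), rs.getInt("success"),
                            rs.getTimestamp("atTime"));
                }
            }
        }
    }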
grafana | logger=migrator t=2024-04-09T14:11:55.155998598Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.268524ms
policy-pap | client.id = consumer-policy-pap-4
policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,207] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | client.rack =
grafana | logger=migrator t=2024-04-09T14:11:55.158912192Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,208] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-04-09T14:11:55.160038762Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.12716ms
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,208] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
policy-pap | default.api.timeout.ms = 60000
grafana | logger=migrator t=2024-04-09T14:11:55.165272469Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,208] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | enable.auto.commit = true
grafana | logger=migrator t=2024-04-09T14:11:55.19021667Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.940481ms
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,208] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | exclude.internal.topics = true
grafana | logger=migrator t=2024-04-09T14:11:55.195919305Z level=info msg="Executing migration" id="add annotations_enabled column"
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,227] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | fetch.max.bytes = 52428800
grafana | logger=migrator t=2024-04-09T14:11:55.201967047Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.047382ms
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,228] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-04-09T14:11:55.204994863Z level=info msg="Executing migration" id="add time_selection_enabled column"
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,228] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
policy-pap | fetch.min.bytes = 1
grafana | logger=migrator t=2024-04-09T14:11:55.213667753Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.66695ms
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,228] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | group.id = policy-pap
grafana | logger=migrator t=2024-04-09T14:11:55.21672833Z level=info msg="Executing migration" id="delete orphaned public dashboards"
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,228] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | group.instance.id = null
grafana | logger=migrator t=2024-04-09T14:11:55.216969744Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=241.744µs
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,236] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-04-09T14:11:55.220958038Z level=info msg="Executing migration" id="add share column"
policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,238] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-04-09T14:11:55.229939624Z level=info msg="Migration successfully executed" id="add share column" duration=8.979086ms
policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,239] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-04-09T14:11:55.233043281Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
kafka | [2024-04-09 14:12:25,239] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-04-09T14:11:55.233250225Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=207.304µs
kafka | [2024-04-09 14:12:25,239] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | isolation.level = read_uncommitted
policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
grafana | logger=migrator t=2024-04-09T14:11:55.236090828Z level=info msg="Executing migration" id="create file table"
kafka | [2024-04-09 14:12:25,249] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
grafana | logger=migrator t=2024-04-09T14:11:55.237076306Z level=info msg="Migration successfully executed" id="create file table" duration=984.978µs
kafka | [2024-04-09 14:12:25,250] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | max.partition.fetch.bytes = 1048576
policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
grafana | logger=migrator t=2024-04-09T14:11:55.240191543Z level=info msg="Executing migration" id="file table idx: path natural pk"
kafka | [2024-04-09 14:12:25,250] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
policy-pap | max.poll.interval.ms = 300000
policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54
grafana | logger=migrator t=2024-04-09T14:11:55.241321324Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.127491ms
kafka | [2024-04-09 14:12:25,250] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | max.poll.records = 500
policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.245234317Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
policy-pap | metadata.max.age.ms = 300000
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.246392028Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.158341ms
kafka | [2024-04-09 14:12:25,250] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
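The ConsumerConfig dump threading through these lines is the effective configuration of consumer-policy-pap-4; most values match Kafka 3.6 defaults, with group.id, bootstrap.servers and the String deserializers set by PAP. Reproducing the core of that dump in code would look roughly like the following sketch (values copied from the dump above, not from PAP's actual source):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PapConsumerConfig {
        static KafkaConsumer<String, String> build() {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");     // bootstrap.servers = [kafka:9092]
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");              // group.id = policy-pap
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");         // auto.offset.reset = latest
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);               // max.poll.records = 500
            props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 45000);           // session.timeout.ms = 45000
            props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_uncommitted"); // isolation.level = read_uncommitted
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            return new KafkaConsumer<>(props);
        }
    }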
policy-pap | metric.reporters = []
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.249297932Z level=info msg="Executing migration" id="create file_meta table"
kafka | [2024-04-09 14:12:25,263] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | metrics.num.samples = 2
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.25029433Z level=info msg="Migration successfully executed" id="create file_meta table" duration=995.578µs
kafka | [2024-04-09 14:12:25,264] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | metrics.recording.level = INFO
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.253469139Z level=info msg="Executing migration" id="file table idx: path key"
kafka | [2024-04-09 14:12:25,264] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
policy-pap | metrics.sample.window.ms = 30000
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.254830194Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.360695ms
kafka | [2024-04-09 14:12:25,265] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.262877823Z level=info msg="Executing migration" id="set path collation in file table"
kafka | [2024-04-09 14:12:25,265] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
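Every "Created log" entry above shows the log settings the broker applies to __consumer_offsets partitions: compaction, producer-supplied compression, and 100 MiB segments. __consumer_offsets itself is created internally, but creating a regular topic with the same log settings via AdminClient would look like the sketch below (topic name and counts are illustrative):

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CompactTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // Same log properties the LogManager reports for __consumer_offsets above.
                NewTopic topic = new NewTopic("example-compacted", 50, (short) 1)
                        .configs(Map.of(
                                "cleanup.policy", "compact",
                                "compression.type", "producer",
                                "segment.bytes", "104857600"));
                admin.createTopics(List.of(topic)).all().get();
            }
        }
    }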
policy-pap | receive.buffer.bytes = 65536
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.262953074Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=76.311µs
kafka | [2024-04-09 14:12:25,277] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | reconnect.backoff.max.ms = 1000
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.272225605Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
kafka | [2024-04-09 14:12:25,278] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | reconnect.backoff.ms = 50
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.272369998Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=148.003µs
kafka | [2024-04-09 14:12:25,278] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
policy-pap | request.timeout.ms = 30000
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.275252041Z level=info msg="Executing migration" id="managed permissions migration"
kafka | [2024-04-09 14:12:25,279] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | retry.backoff.ms = 100
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.275910573Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=663.032µs
kafka | [2024-04-09 14:12:25,279] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | sasl.client.callback.handler.class = null
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.279158443Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
kafka | [2024-04-09 14:12:25,286] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | sasl.jaas.config = null
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.279354847Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=196.424µs
kafka | [2024-04-09 14:12:25,286] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.283497614Z level=info msg="Executing migration" id="RBAC action name migrator"
kafka | [2024-04-09 14:12:25,286] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.285552092Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.054337ms
kafka | [2024-04-09 14:12:25,286] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | sasl.kerberos.service.name = null
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
grafana | logger=migrator t=2024-04-09T14:11:55.289828091Z level=info msg="Executing migration" id="Add UID column to playlist"
kafka | [2024-04-09 14:12:25,286] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-04-09T14:11:55.302201719Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=12.374139ms
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
kafka | [2024-04-09 14:12:25,294] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-04-09T14:11:55.306989468Z level=info msg="Executing migration" id="Update uid column values in playlist"
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
kafka | [2024-04-09 14:12:25,295] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-04-09T14:11:55.307164481Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=176.303µs
policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
kafka | [2024-04-09 14:12:25,295] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
policy-pap | sasl.login.class = null
grafana | logger=migrator t=2024-04-09T14:11:55.313891945Z level=info msg="Executing migration" id="Add index for uid in playlist"
policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
kafka | [2024-04-09 14:12:25,295] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-04-09T14:11:55.315140158Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.248233ms
policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
kafka | [2024-04-09 14:12:25,295] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
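The three playlist steps above (Add UID column, Update uid column values, Add index for uid) are the usual add-column, backfill, index sequence for introducing a new identifier without breaking existing rows. A rough SQL-over-JDBC sketch of the same shape; the column type, backfill expression and index name are assumptions, since the log records only the step ids:

    import java.sql.Connection;
    import java.sql.Statement;

    public class PlaylistUidMigration {
        // Illustrative only: the real Grafana migration defines its own types and backfill.
        static void migrate(Connection conn) throws Exception {
            try (Statement st = conn.createStatement()) {
                // 1. Add the new column with a neutral default so old rows stay valid.
                st.execute("ALTER TABLE playlist ADD COLUMN uid VARCHAR(80) NOT NULL DEFAULT ''");
                // 2. Backfill a value for every existing row.
                st.execute("UPDATE playlist SET uid = CAST(id AS CHAR) WHERE uid = ''");
                // 3. Index the new column for lookups.
                st.execute("CREATE INDEX IDX_playlist_uid ON playlist (uid)");
            }
        }
    }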
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-04-09T14:11:55.318027422Z level=info msg="Executing migration" id="update group index for alert rules"
policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55
kafka | [2024-04-09 14:12:25,305] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-04-09T14:11:55.318429329Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=402.747µs
policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
kafka | [2024-04-09 14:12:25,305] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-04-09T14:11:55.322067246Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
kafka | [2024-04-09 14:12:25,305] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-04-09T14:11:55.322285Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=219.394µs
policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
kafka | [2024-04-09 14:12:25,306] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-04-09T14:11:55.325393708Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
kafka | [2024-04-09 14:12:25,306] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-04-09T14:11:55.32604531Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=651.342µs
policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
kafka | [2024-04-09 14:12:25,312] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-04-09T14:11:55.330445911Z level=info msg="Executing migration" id="add action column to seed_assignment"
policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
kafka | [2024-04-09 14:12:25,313] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-04-09T14:11:55.340067429Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.618418ms
policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
kafka | [2024-04-09 14:12:25,313] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=migrator t=2024-04-09T14:11:55.346374115Z level=info msg="Executing migration" id="add scope column to seed_assignment"
policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
kafka | [2024-04-09 14:12:25,313] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-04-09T14:11:55.356218417Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.842102ms
policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
kafka | [2024-04-09 14:12:25,313] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-04-09T14:11:55.359599203Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
kafka | [2024-04-09 14:12:25,321] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-04-09T14:11:55.36068917Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.09002ms
policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
kafka | [2024-04-09 14:12:25,321] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
grafana | logger=migrator t=2024-04-09T14:11:55.364728544Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
kafka | [2024-04-09 14:12:25,321] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-04-09T14:11:55.460706218Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=95.972934ms
kafka | [2024-04-09 14:12:25,321] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-04-09T14:11:55.468991271Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
kafka | [2024-04-09 14:12:25,321] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-04-09T14:11:55.47000591Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.016019ms
policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
kafka | [2024-04-09 14:12:25,328] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-04-09T14:11:55.504849314Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
policy-pap | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-04-09T14:11:55.506981013Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=2.13346ms
policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
policy-pap | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-04-09T14:11:55.513122236Z level=info msg="Executing migration" id="add primary key to seed_assigment"
policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
policy-pap | security.providers = null
grafana | logger=migrator t=2024-04-09T14:11:55.541509221Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.383555ms
policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-04-09T14:11:55.549076021Z level=info msg="Executing migration" id="add origin column to seed_assignment"
policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
policy-pap | session.timeout.ms = 45000
grafana | logger=migrator t=2024-04-09T14:11:55.555492709Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.416488ms
policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | [2024-04-09 14:12:25,328] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-09T14:11:55.559948072Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | [2024-04-09 14:12:25,328] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-09T14:11:55.560281978Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=335.286µs
policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56
policy-pap | ssl.cipher.suites = null
kafka | [2024-04-09 14:12:25,329] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-09T14:11:55.567566272Z level=info msg="Executing migration" id="prevent seeding OnCall access"
policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-04-09T14:11:55.567803897Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=241.705µs
policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-04-09 14:12:25,329] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:55.570907794Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57
policy-pap | ssl.engine.factory.class = null
kafka | [2024-04-09 14:12:25,335] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-04-09T14:11:55.571124078Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=216.704µs
policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57
policy-pap | ssl.key.password = null
kafka | [2024-04-09 14:12:25,335] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-09T14:11:55.574357708Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-04-09 14:12:25,335] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-09T14:11:55.574566022Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=209.184µs
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-04-09 14:12:25,335] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-09T14:11:55.581155133Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57
policy-pap | ssl.keystore.key = null
kafka | [2024-04-09 14:12:25,335] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-04-09T14:11:55.581373217Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=219.404µs
policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57
policy-pap | ssl.keystore.location = null
kafka | [2024-04-09 14:12:25,346] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-04-09T14:11:55.591866521Z level=info msg="Executing migration" id="create folder table"
policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57
policy-pap | ssl.keystore.password = null
kafka | [2024-04-09 14:12:25,346] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-04-09T14:11:55.593098594Z level=info msg="Migration successfully executed" id="create folder table" duration=1.234063ms
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57
policy-pap | ssl.keystore.type = JKS
kafka | [2024-04-09 14:12:25,346] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-09T14:11:55.596073919Z level=info msg="Executing migration" id="Add index for parent_uid"
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-04-09 14:12:25,346] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-04-09T14:11:55.597649698Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.574969ms
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57
kafka | [2024-04-09 14:12:25,347] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
(state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:55.601886776Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 policy-pap | ssl.provider = null kafka | [2024-04-09 14:12:25,352] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-09T14:11:55.603316683Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.429627ms policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 policy-pap | ssl.secure.random.implementation = null kafka | [2024-04-09 14:12:25,353] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-09T14:11:55.606716226Z level=info msg="Executing migration" id="Update folder title length" policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-04-09T14:11:55.606882729Z level=info msg="Migration successfully executed" id="Update folder title length" duration=166.383µs policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 kafka | [2024-04-09 14:12:25,353] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-04-09T14:11:55.609708891Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 kafka | [2024-04-09 14:12:25,353] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | ssl.truststore.password = null grafana | logger=migrator t=2024-04-09T14:11:55.611072476Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.364885ms policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 kafka | [2024-04-09 14:12:25,353] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-04-09T14:11:55.616265242Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 kafka | [2024-04-09 14:12:25,361] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-04-09T14:11:55.617611697Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.344905ms kafka | [2024-04-09 14:12:25,362] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | grafana | logger=migrator t=2024-04-09T14:11:55.620882197Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 kafka | [2024-04-09 14:12:25,362] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) policy-pap | [2024-04-09T14:12:24.297+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 grafana | logger=migrator t=2024-04-09T14:11:55.623071718Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.189121ms policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 kafka | [2024-04-09 14:12:25,362] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-04-09T14:12:24.297+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-04-09T14:11:55.628315485Z level=info msg="Executing migration" id="Sync dashboard and folder table" policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 kafka | [2024-04-09 14:12:25,363] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-04-09T14:12:24.297+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671944297 grafana | logger=migrator t=2024-04-09T14:11:55.628863725Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=547.93µs policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 kafka | [2024-04-09 14:12:25,369] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-04-09T14:12:24.297+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap grafana | logger=migrator t=2024-04-09T14:11:55.634714943Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 policy-pap | [2024-04-09T14:12:24.298+00:00|INFO|ServiceManager|main] Policy PAP starting topics grafana | logger=migrator t=2024-04-09T14:11:55.63562989Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=915.407µs policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 kafka | [2024-04-09 14:12:25,369] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-09T14:11:55.638646956Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 policy-pap | [2024-04-09T14:12:24.298+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=6736d7e9-6714-4f8e-b97c-2edf4d38cb1b, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting kafka | [2024-04-09 14:12:25,369] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-09T14:11:55.640538711Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.892075ms policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 policy-pap | [2024-04-09T14:12:24.298+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=8886bf5a-38da-4c7c-af7d-ca09814a22ad, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper 
[fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting kafka | [2024-04-09 14:12:25,369] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-09T14:11:55.643692639Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 policy-pap | [2024-04-09T14:12:24.298+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=564a2e6e-474f-4e32-b0d5-9fb32de5e450, alive=false, publisher=null]]: starting kafka | [2024-04-09 14:12:25,369] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:55.644976423Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.283304ms policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 policy-pap | [2024-04-09T14:12:24.313+00:00|INFO|ProducerConfig|main] ProducerConfig values: kafka | [2024-04-09 14:12:25,377] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-09T14:11:55.651029764Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 policy-pap | acks = -1 kafka | [2024-04-09 14:12:25,377] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-09T14:11:55.652467251Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.437707ms policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 policy-pap | auto.include.jmx.reporter = true kafka | [2024-04-09 14:12:25,377] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-09T14:11:55.655330434Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 grafana | logger=migrator t=2024-04-09T14:11:55.656631458Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.300814ms kafka | [2024-04-09 14:12:25,378] INFO [Partition __consumer_offsets-32 broker=1] Log loaded 
for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 grafana | logger=migrator t=2024-04-09T14:11:55.660263915Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" policy-pap | batch.size = 16384 kafka | [2024-04-09 14:12:25,378] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 grafana | logger=migrator t=2024-04-09T14:11:55.661449077Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.185662ms policy-pap | bootstrap.servers = [kafka:9092] kafka | [2024-04-09 14:12:25,384] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 grafana | logger=migrator t=2024-04-09T14:11:55.665818208Z level=info msg="Executing migration" id="create anon_device table" policy-pap | buffer.memory = 33554432 kafka | [2024-04-09 14:12:25,384] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 grafana | logger=migrator t=2024-04-09T14:11:55.667450638Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.63199ms policy-pap | client.dns.lookup = use_all_dns_ips kafka | [2024-04-09 14:12:25,384] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 policy-pap | client.id = producer-1 grafana | logger=migrator t=2024-04-09T14:11:55.673641812Z level=info msg="Executing migration" id="add unique index anon_device.device_id" kafka | [2024-04-09 14:12:25,384] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:59 policy-pap | compression.type = none grafana | logger=migrator t=2024-04-09T14:11:55.675062849Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.421107ms kafka | [2024-04-09 14:12:25,385] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:59 policy-pap | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-04-09T14:11:55.679955969Z level=info msg="Executing migration" id="add index anon_device.updated_at" kafka | [2024-04-09 14:12:25,395] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:59 policy-pap | delivery.timeout.ms = 120000 grafana | logger=migrator t=2024-04-09T14:11:55.682038927Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.079678ms kafka | [2024-04-09 14:12:25,396] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 policy-pap | enable.idempotence = true grafana | logger=migrator t=2024-04-09T14:11:55.685279127Z level=info msg="Executing migration" id="create signing_key table" kafka | [2024-04-09 14:12:25,396] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-04-09T14:11:55.686381578Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.105011ms kafka | [2024-04-09 14:12:25,396] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-04-09T14:11:55.689996694Z level=info msg="Executing migration" id="add unique index signing_key.key_id" kafka | [2024-04-09 14:12:25,396] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 policy-pap | linger.ms = 0 kafka | [2024-04-09 14:12:25,407] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 grafana | logger=migrator t=2024-04-09T14:11:55.691584014Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.58709ms policy-pap | max.block.ms = 60000 kafka | [2024-04-09 14:12:25,407] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 grafana | logger=migrator t=2024-04-09T14:11:55.696101797Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" policy-pap | max.in.flight.requests.per.connection = 5 kafka | [2024-04-09 14:12:25,407] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-09T14:11:55.697374691Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.272874ms policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 policy-pap | max.request.size = 1048576 kafka | [2024-04-09 14:12:25,407] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-09T14:11:55.700331865Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 policy-pap | metadata.max.age.ms = 300000 kafka | [2024-04-09 14:12:25,408] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:55.700855385Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=464.759µs policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 policy-pap | metadata.max.idle.ms = 300000 kafka | [2024-04-09 14:12:25,418] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-09T14:11:55.706214424Z level=info msg="Executing migration" id="Add folder_uid for dashboard" policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0904241411541100u 1 2024-04-09 14:11:59 policy-pap | metric.reporters = [] kafka | [2024-04-09 14:12:25,419] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-09T14:11:55.718114424Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=11.88318ms policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0904241411541200u 1 2024-04-09 14:11:59 policy-pap | metrics.num.samples = 2 kafka | [2024-04-09 14:12:25,419] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-09T14:11:55.722365383Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0904241411541200u 1 2024-04-09 14:11:59 policy-pap | metrics.recording.level = INFO kafka | [2024-04-09 14:12:25,419] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-09T14:11:55.723211978Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=847.365µs policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0904241411541200u 1 2024-04-09 14:11:59 policy-pap | metrics.sample.window.ms = 30000 kafka | [2024-04-09 14:12:25,419] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:55.726049231Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0904241411541200u 1 2024-04-09 14:11:59 policy-pap | partitioner.adaptive.partitioning.enable = true kafka | [2024-04-09 14:12:25,431] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-09T14:11:55.727174191Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.12486ms policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0904241411541300u 1 2024-04-09 14:11:59 policy-pap | partitioner.availability.timeout.ms = 0 kafka | [2024-04-09 14:12:25,432] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-09T14:11:55.730235958Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0904241411541300u 1 2024-04-09 14:11:59 policy-pap | partitioner.class = null kafka | [2024-04-09 14:12:25,432] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-09T14:11:55.731526451Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.290433ms policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0904241411541300u 1 2024-04-09 14:12:00 policy-pap | partitioner.ignore.keys = false kafka | [2024-04-09 14:12:25,432] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-09T14:11:55.734315773Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" policy-db-migrator | policyadmin: OK @ 1300 policy-pap | receive.buffer.bytes = 32768 kafka | [2024-04-09 14:12:25,432] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:55.735554876Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.239353ms policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-04-09 14:12:25,441] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-09T14:11:55.741329062Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" policy-pap | reconnect.backoff.ms = 50 kafka | [2024-04-09 14:12:25,442] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-09T14:11:55.742678007Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.348715ms policy-pap | request.timeout.ms = 30000 kafka | [2024-04-09 14:12:25,442] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-09T14:11:55.747379494Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" policy-pap | retries = 2147483647 kafka | [2024-04-09 14:12:25,442] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-09T14:11:55.749568905Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=2.188211ms policy-pap | retry.backoff.ms = 100 kafka | [2024-04-09 14:12:25,442] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-04-09T14:11:55.753899705Z level=info msg="Executing migration" id="create sso_setting table" policy-pap | sasl.client.callback.handler.class = null kafka | [2024-04-09 14:12:25,458] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-04-09T14:11:55.755983313Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=2.076448ms policy-pap | sasl.jaas.config = null kafka | [2024-04-09 14:12:25,460] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-04-09T14:11:55.760752261Z level=info msg="Executing migration" id="copy kvstore migration status to each org" policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-04-09 14:12:25,460] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) grafana | logger=migrator t=2024-04-09T14:11:55.762710028Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.959146ms policy-pap | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-04-09T14:11:55.765623971Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" kafka | [2024-04-09 14:12:25,460] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-04-09T14:11:55.765993188Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=370.217µs kafka | [2024-04-09 14:12:25,460] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-04-09T14:11:55.770073404Z level=info msg="Executing migration" id="alter kv_store.value to longtext" kafka | [2024-04-09 14:12:25,468] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-04-09T14:11:55.770264087Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=189.983µs kafka | [2024-04-09 14:12:25,469] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-04-09T14:11:55.772883436Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" kafka | [2024-04-09 14:12:25,469] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) policy-pap | sasl.login.class = null grafana | logger=migrator t=2024-04-09T14:11:55.782247229Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.363183ms kafka | [2024-04-09 14:12:25,469] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-04-09T14:11:55.786542798Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" kafka | [2024-04-09 14:12:25,469] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-04-09T14:11:55.795782179Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.238421ms kafka | [2024-04-09 14:12:25,483] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-04-09T14:11:55.79854781Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" kafka | [2024-04-09 14:12:25,484] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-04-09T14:11:55.799012898Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=464.768µs kafka | [2024-04-09 14:12:25,484] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) policy-pap | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-04-09T14:11:55.804019341Z level=info msg="migrations completed" performed=548 skipped=0 duration=4.957264176s kafka | [2024-04-09 14:12:25,484] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.login.refresh.window.jitter = 0.05 grafana | logger=sqlstore t=2024-04-09T14:11:55.813678179Z level=info msg="Created default admin" user=admin kafka | [2024-04-09 14:12:25,484] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=sqlstore t=2024-04-09T14:11:55.814060466Z level=info msg="Created default organization" kafka | [2024-04-09 14:12:25,496] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.login.retry.backoff.ms = 100 grafana | logger=secrets t=2024-04-09T14:11:55.819777382Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 kafka | [2024-04-09 14:12:25,497] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) policy-pap | sasl.mechanism = GSSAPI grafana | logger=plugin.store t=2024-04-09T14:11:55.840590977Z level=info msg="Loading plugins..." 
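For reference, the policy-pap entries above describe the consumers PAP builds at startup: bootstrap servers [kafka:9092], consumer group policy-pap, String deserializers for key and value, and topic sources whose effective topic is policy-pdp-pap. The following is a minimal stand-alone Java sketch of an equivalent consumer, assuming the kafka:9092 broker from this test stack is reachable; the class name PdpPapConsumerSketch and the printed fields are illustrative only and do not appear in the policy-pap codebase.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {  // hypothetical name, not a policy-pap class
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values taken from the consumer config dump in the log; everything else stays at defaults.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Both PAP topic sources above resolve to the same effective topic, policy-pdp-pap.
                consumer.subscribe(List.of("policy-pdp-pap"));
                // 15 s mirrors the fetchTimeout=15000 visible in the SingleThreadedBusTopicSource entries.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }

Using a consumer group this way is also why the kafka entries in this section show the internal __consumer_offsets partitions being created with cleanup.policy=compact: that compacted topic is where the broker stores the group's committed offsets.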
kafka | [2024-04-09 14:12:25,498] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=local.finder t=2024-04-09T14:11:55.883925877Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
kafka | [2024-04-09 14:12:25,498] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | sasl.oauthbearer.expected.audience = null
grafana | logger=plugin.store t=2024-04-09T14:11:55.884031579Z level=info msg="Plugins loaded" count=55 duration=43.424362ms
kafka | [2024-04-09 14:12:25,499] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(ITmYpZ6rSK-iF5o_1J2T3Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
grafana | logger=query_data t=2024-04-09T14:11:55.886518695Z level=info msg="Query Service initialization"
kafka | [2024-04-09 14:12:25,538] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=live.push_http t=2024-04-09T14:11:55.890193963Z level=info msg="Live Push Gateway initialization"
kafka | [2024-04-09 14:12:25,538] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=ngalert.migration t=2024-04-09T14:11:55.896122553Z level=info msg=Starting
kafka | [2024-04-09 14:12:25,538] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=ngalert.migration t=2024-04-09T14:11:55.89652894Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
kafka | [2024-04-09 14:12:25,538] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=ngalert.migration orgID=1 t=2024-04-09T14:11:55.896956728Z level=info msg="Migrating alerts for organisation"
kafka | [2024-04-09 14:12:25,538] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=ngalert.migration orgID=1 t=2024-04-09T14:11:55.89758835Z level=info msg="Alerts found to migrate" alerts=0
kafka | [2024-04-09 14:12:25,545] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=ngalert.migration t=2024-04-09T14:11:55.899434234Z level=info msg="Completed alerting migration"
kafka | [2024-04-09 14:12:25,545] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=ngalert.state.manager t=2024-04-09T14:11:55.924562798Z level=info msg="Running in alternative execution of Error/NoData mode"
kafka | [2024-04-09 14:12:25,545] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
policy-pap | security.protocol = PLAINTEXT
grafana | logger=infra.usagestats.collector t=2024-04-09T14:11:55.926955562Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
kafka | [2024-04-09 14:12:25,545] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | security.providers = null
grafana | logger=provisioning.datasources t=2024-04-09T14:11:55.930742442Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
kafka | [2024-04-09 14:12:25,545] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | send.buffer.bytes = 131072
grafana | logger=provisioning.alerting t=2024-04-09T14:11:55.94574985Z level=info msg="starting to provision alerting"
kafka | [2024-04-09 14:12:25,558] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=provisioning.alerting t=2024-04-09T14:11:55.945802121Z level=info msg="finished to provision alerting"
kafka | [2024-04-09 14:12:25,558] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | socket.connection.setup.timeout.ms = 10000
grafana | logger=ngalert.state.manager t=2024-04-09T14:11:55.949974818Z level=info msg="Warming state cache for startup"
kafka | [2024-04-09 14:12:25,559] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
policy-pap | ssl.cipher.suites = null
grafana | logger=grafanaStorageLogger t=2024-04-09T14:11:55.959404832Z level=info msg="Storage starting"
kafka | [2024-04-09 14:12:25,559] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=http.server t=2024-04-09T14:11:55.958325652Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
kafka | [2024-04-09 14:12:25,560] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | ssl.endpoint.identification.algorithm = https
grafana | logger=ngalert.multiorg.alertmanager t=2024-04-09T14:11:55.959666147Z level=info msg="Starting MultiOrg Alertmanager"
kafka | [2024-04-09 14:12:25,566] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | ssl.engine.factory.class = null
grafana | logger=sqlstore.transactions t=2024-04-09T14:11:55.959793819Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
kafka | [2024-04-09 14:12:25,567] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | ssl.key.password = null
grafana | logger=ngalert.state.manager t=2024-04-09T14:11:55.989306214Z level=info msg="State cache has been initialized" states=0 duration=39.326486ms
kafka | [2024-04-09 14:12:25,567] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
policy-pap | ssl.keymanager.algorithm = SunX509
grafana | logger=ngalert.scheduler t=2024-04-09T14:11:55.989521028Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
kafka | [2024-04-09 14:12:25,567] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | ssl.keystore.certificate.chain = null
grafana | logger=ticker t=2024-04-09T14:11:55.989789473Z level=info msg=starting first_tick=2024-04-09T14:12:00Z
kafka | [2024-04-09 14:12:25,567] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | ssl.keystore.key = null
grafana | logger=provisioning.dashboard t=2024-04-09T14:11:56.018392902Z level=info msg="starting to provision dashboards"
kafka | [2024-04-09 14:12:25,571] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | ssl.keystore.location = null
grafana | logger=grafana.update.checker t=2024-04-09T14:11:56.06105286Z level=info msg="Update check succeeded" duration=112.422247ms
kafka | [2024-04-09 14:12:25,571] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | ssl.keystore.password = null
grafana | logger=sqlstore.transactions t=2024-04-09T14:11:56.073124123Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
kafka | [2024-04-09 14:12:25,571] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
policy-pap | ssl.keystore.type = JKS
grafana | logger=sqlstore.transactions t=2024-04-09T14:11:56.083732339Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
kafka | [2024-04-09 14:12:25,572] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | ssl.protocol = TLSv1.3
grafana | logger=plugins.update.checker t=2024-04-09T14:11:56.08813062Z level=info msg="Update check succeeded" duration=142.157726ms
kafka | [2024-04-09 14:12:25,572] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | ssl.provider = null
grafana | logger=sqlstore.transactions t=2024-04-09T14:11:56.095271542Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked"
kafka | [2024-04-09 14:12:25,577] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | ssl.secure.random.implementation = null
grafana | logger=sqlstore.transactions t=2024-04-09T14:11:56.115385603Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=3 code="database is locked"
kafka | [2024-04-09 14:12:25,578] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | ssl.trustmanager.algorithm = PKIX
grafana | logger=provisioning.dashboard t=2024-04-09T14:11:56.30089465Z level=info msg="finished to provision dashboards"
kafka | [2024-04-09 14:12:25,578] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
policy-pap | ssl.truststore.certificates = null
grafana | logger=grafana-apiserver t=2024-04-09T14:11:56.383666809Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
kafka | [2024-04-09 14:12:25,578] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | ssl.truststore.location = null
grafana | logger=grafana-apiserver t=2024-04-09T14:11:56.384134958Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
kafka | [2024-04-09 14:12:25,578] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | ssl.truststore.password = null
grafana | logger=infra.usagestats t=2024-04-09T14:13:15.956671481Z level=info msg="Usage stats are ready to report"
kafka | [2024-04-09 14:12:25,584] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | ssl.truststore.type = JKS
kafka | [2024-04-09 14:12:25,584] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | transaction.timeout.ms = 60000
kafka | [2024-04-09 14:12:25,584] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
policy-pap | transactional.id = null
kafka | [2024-04-09 14:12:25,584] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-04-09 14:12:25,585] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap |
kafka | [2024-04-09 14:12:25,590] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-09T14:12:24.336+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
kafka | [2024-04-09 14:12:25,590] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-09T14:12:24.352+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | [2024-04-09 14:12:25,590] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
policy-pap | [2024-04-09T14:12:24.352+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | [2024-04-09 14:12:25,591] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-04-09T14:12:24.352+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671944352
kafka | [2024-04-09 14:12:25,591] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-04-09T14:12:24.352+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=564a2e6e-474f-4e32-b0d5-9fb32de5e450, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
kafka | [2024-04-09 14:12:25,596] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-04-09T14:12:24.352+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=32922d40-9f92-4f75-b434-f52361ae7b3f, alive=false, publisher=null]]: starting
kafka | [2024-04-09 14:12:25,597] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-04-09T14:12:24.353+00:00|INFO|ProducerConfig|main] ProducerConfig values:
kafka | [2024-04-09 14:12:25,597] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
policy-pap | acks = -1
kafka | [2024-04-09 14:12:25,597] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-04-09 14:12:25,597] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | batch.size = 16384
kafka | [2024-04-09 14:12:25,602] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-04-09 14:12:25,602] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | buffer.memory = 33554432
kafka | [2024-04-09 14:12:25,602] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-04-09 14:12:25,602] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | client.id = producer-2
kafka | [2024-04-09 14:12:25,602] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | compression.type = none
kafka | [2024-04-09 14:12:25,607] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | connections.max.idle.ms = 540000
kafka | [2024-04-09 14:12:25,608] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | delivery.timeout.ms = 120000
policy-pap | enable.idempotence = true
kafka | [2024-04-09 14:12:25,608] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
policy-pap | interceptor.classes = []
kafka | [2024-04-09 14:12:25,608] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-04-09 14:12:25,609] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | linger.ms = 0
kafka | [2024-04-09 14:12:25,613] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | max.block.ms = 60000
kafka | [2024-04-09 14:12:25,614] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | max.in.flight.requests.per.connection = 5
kafka | [2024-04-09 14:12:25,614] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
policy-pap | max.request.size = 1048576
kafka | [2024-04-09 14:12:25,614] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-04-09 14:12:25,614] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | metadata.max.idle.ms = 300000
kafka | [2024-04-09 14:12:25,619] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | metric.reporters = []
kafka | [2024-04-09 14:12:25,620] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | metrics.num.samples = 2
kafka | [2024-04-09 14:12:25,620] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
policy-pap | metrics.recording.level = INFO
kafka | [2024-04-09 14:12:25,620] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | metrics.sample.window.ms = 30000
kafka | [2024-04-09 14:12:25,620] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | partitioner.adaptive.partitioning.enable = true
kafka | [2024-04-09 14:12:25,629] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | partitioner.availability.timeout.ms = 0
kafka | [2024-04-09 14:12:25,630] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | partitioner.class = null
kafka | [2024-04-09 14:12:25,630] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
policy-pap | partitioner.ignore.keys = false
kafka | [2024-04-09 14:12:25,630] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-04-09 14:12:25,631] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) policy-pap | receive.buffer.bytes = 32768 kafka | [2024-04-09 14:12:25,638] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-04-09 14:12:25,638] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | reconnect.backoff.ms = 50 kafka | [2024-04-09 14:12:25,638] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) policy-pap | request.timeout.ms = 30000 kafka | [2024-04-09 14:12:25,638] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | retries = 2147483647 kafka | [2024-04-09 14:12:25,639] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | retry.backoff.ms = 100 kafka | [2024-04-09 14:12:25,644] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.client.callback.handler.class = null kafka | [2024-04-09 14:12:25,645] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | sasl.jaas.config = null kafka | [2024-04-09 14:12:25,645] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-04-09 14:12:25,645] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-04-09 14:12:25,646] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | sasl.kerberos.service.name = null kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) policy-pap | sasl.login.callback.handler.class = null kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) policy-pap | sasl.login.class = null kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) policy-pap | sasl.mechanism = GSSAPI kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 
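The kafka lines above show the broker materializing the 50 partitions of its internal __consumer_offsets topic, each with {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}. As an illustration only (the offsets topic is created internally by the broker, never by client code; the topic name below is hypothetical), an equivalently configured compacted topic could be created through the Kafka AdminClient like this:

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.config.TopicConfig;

    public class CompactedTopicSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Same bootstrap address the policy-pap clients in this log use.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // "example-offsets" is a hypothetical name; 50 partitions and
                // the config values mirror the broker log lines above.
                NewTopic topic = new NewTopic("example-offsets", 50, (short) 1)
                    .configs(Map.of(
                        TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT,
                        TopicConfig.SEGMENT_BYTES_CONFIG, "104857600"));
                admin.createTopics(List.of(topic)).all().get();
            }
        }
    }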
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
policy-pap | security.protocol = PLAINTEXT
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
policy-pap | security.providers = null
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
policy-pap | send.buffer.bytes = 131072
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
policy-pap | ssl.cipher.suites = null
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
policy-pap | ssl.engine.factory.class = null
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
policy-pap | ssl.key.password = null
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
policy-pap | ssl.keystore.key = null
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
policy-pap | ssl.keystore.location = null
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
policy-pap | ssl.keystore.password = null
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
policy-pap | ssl.keystore.type = JKS
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
policy-pap | ssl.provider = null
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
policy-pap | ssl.secure.random.implementation = null
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
policy-pap | ssl.truststore.certificates = null
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
policy-pap | ssl.truststore.type = JKS
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
policy-pap | transaction.timeout.ms = 60000
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
policy-pap | transactional.id = null
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
policy-pap |
kafka | [2024-04-09 14:12:25,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
policy-pap | [2024-04-09T14:12:24.354+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
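The ProducerConfig dump above is what the Kafka client library prints whenever a producer is constructed; producer-2 here is the second sink PAP builds via InlineKafkaTopicSink. A minimal sketch of a producer with the same non-default settings (an illustration, not the policy-pap source itself) would be:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PapSinkProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // bootstrap.servers = [kafka:9092]
            props.put(ProducerConfig.CLIENT_ID_CONFIG, "producer-2");         // client.id = producer-2
            props.put(ProducerConfig.ACKS_CONFIG, "all");                     // logged as acks = -1
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);        // enable.idempotence = true
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Topic and payload shape taken from the PDP_UPDATE traffic later in this log.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
            }
        }
    }

acks = -1 is the wire value of acks=all; together with enable.idempotence = true and retries = 2147483647 it is what makes the client report "Instantiated an idempotent producer."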
kafka | [2024-04-09 14:12:25,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
policy-pap | [2024-04-09T14:12:24.356+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | [2024-04-09 14:12:25,675] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:24.356+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | [2024-04-09 14:12:25,677] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:24.356+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671944356
kafka | [2024-04-09 14:12:25,678] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:24.356+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=32922d40-9f92-4f75-b434-f52361ae7b3f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
kafka | [2024-04-09 14:12:25,678] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:24.356+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
kafka | [2024-04-09 14:12:25,678] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:24.356+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
kafka | [2024-04-09 14:12:25,678] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:24.357+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
kafka | [2024-04-09 14:12:25,678] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:24.358+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
kafka | [2024-04-09 14:12:25,678] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:24.361+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
kafka | [2024-04-09 14:12:25,678] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:24.361+00:00|INFO|TimerManager|Thread-9] timer manager update started
kafka | [2024-04-09 14:12:25,678] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:24.363+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
kafka | [2024-04-09 14:12:25,679] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:24.363+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
kafka | [2024-04-09 14:12:25,679] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:24.363+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
kafka | [2024-04-09 14:12:25,679] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:24.364+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
kafka | [2024-04-09 14:12:25,679] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:24.365+00:00|INFO|ServiceManager|main] Policy PAP started
kafka | [2024-04-09 14:12:25,679] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:24.366+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.357 seconds (process running for 11.01)
kafka | [2024-04-09 14:12:25,679] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:24.782+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-04-09 14:12:25,679] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:24.784+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Cluster ID: TupwFhGQQjGmvCIddVeH4w
kafka | [2024-04-09 14:12:25,679] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:24.784+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: TupwFhGQQjGmvCIddVeH4w
kafka | [2024-04-09 14:12:25,679] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:24.785+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: TupwFhGQQjGmvCIddVeH4w
kafka | [2024-04-09 14:12:25,679] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:24.823+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-09 14:12:25,679] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:24.823+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: TupwFhGQQjGmvCIddVeH4w
kafka | [2024-04-09 14:12:25,679] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:24.893+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
kafka | [2024-04-09 14:12:25,679] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:24.893+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-09 14:12:25,680] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:24.897+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
kafka | [2024-04-09 14:12:25,680] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:24.964+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-09 14:12:25,680] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:25.008+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-09 14:12:25,680] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:25.084+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-09 14:12:25,680] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-09 14:12:25,680] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-09 14:12:25,680] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:25.116+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-09 14:12:25,680] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:25.189+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-09 14:12:25,680] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:25.235+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-09 14:12:25,680] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:25.294+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-09 14:12:25,680] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:25.341+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-04-09T14:12:25.411+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-04-09T14:12:25.450+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-09 14:12:25,680] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:25.516+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-09 14:12:25,680] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:25.557+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-09 14:12:25,681] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:25.623+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-09 14:12:25,681] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:25.668+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-04-09 14:12:25,681] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:25.737+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
kafka | [2024-04-09 14:12:25,681] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:25.744+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
kafka | [2024-04-09 14:12:25,681] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:25.774+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
kafka | [2024-04-09 14:12:25,681] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:25.775+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-cc6c70d5-9f7a-4a85-8bbc-f691b3908254
kafka | [2024-04-09 14:12:25,681] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:25.775+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
kafka | [2024-04-09 14:12:25,681] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:25.776+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
kafka | [2024-04-09 14:12:25,681] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:25.776+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] (Re-)joining group
kafka | [2024-04-09 14:12:25,681] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:25.782+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Request joining group due to: need to re-join with the given member-id: consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3-937938bf-de0f-412f-b074-806532e1f873
kafka | [2024-04-09 14:12:25,681] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:25.782+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
kafka | [2024-04-09 14:12:25,681] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:25.782+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] (Re-)joining group
kafka | [2024-04-09 14:12:25,682] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:28.810+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-cc6c70d5-9f7a-4a85-8bbc-f691b3908254', protocol='range'}
kafka | [2024-04-09 14:12:25,682] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:28.812+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Successfully joined group with generation Generation{generationId=1, memberId='consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3-937938bf-de0f-412f-b074-806532e1f873', protocol='range'}
kafka | [2024-04-09 14:12:25,682] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:28.819+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Finished assignment for group at generation 1: {consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3-937938bf-de0f-412f-b074-806532e1f873=Assignment(partitions=[policy-pdp-pap-0])}
kafka | [2024-04-09 14:12:25,682] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:28.819+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-cc6c70d5-9f7a-4a85-8bbc-f691b3908254=Assignment(partitions=[policy-pdp-pap-0])}
kafka | [2024-04-09 14:12:25,682] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:28.848+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-cc6c70d5-9f7a-4a85-8bbc-f691b3908254', protocol='range'}
kafka | [2024-04-09 14:12:25,682] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:28.849+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
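The ConsumerCoordinator lines above trace the standard Kafka group-join handshake: discover the coordinator, send a first JoinGroup without a member id, get rejected with MemberIdRequiredException, re-join with the coordinator-assigned member id, then sync and receive the partition assignment. A minimal consumer that goes through exactly this sequence (the group id below is hypothetical; PAP's actual group ids are generated UUIDs like 8886bf5a-38da-4c7c-af7d-ca09814a22ad) might look like:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapListenerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group"); // hypothetical group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // With no committed offset, the consumer resets to the log end,
            // matching the "Found no committed offset ... Resetting offset" lines below.
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // subscribe() is lazy; the first poll() drives FindCoordinator + JoinGroup + SyncGroup.
                consumer.subscribe(List.of("policy-pdp-pap"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.println(r.value()); // PDP_STATUS / PDP_UPDATE JSON as seen in this log
                }
            }
        }
    }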
kafka | [2024-04-09 14:12:25,682] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:28.850+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Successfully synced group in generation Generation{generationId=1, memberId='consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3-937938bf-de0f-412f-b074-806532e1f873', protocol='range'} kafka | [2024-04-09 14:12:25,682] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-04-09 14:12:25,682] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-04-09 14:12:25,682] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:28.850+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) kafka | [2024-04-09 14:12:25,682] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:28.853+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 kafka | [2024-04-09 14:12:25,682] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:28.854+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Adding newly assigned partitions: policy-pdp-pap-0 kafka | [2024-04-09 14:12:25,682] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:28.869+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Found no committed offset for partition policy-pdp-pap-0 kafka | [2024-04-09 14:12:25,682] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:28.871+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 kafka | [2024-04-09 14:12:25,683] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:28.891+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, 
offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. kafka | [2024-04-09 14:12:25,683] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:28.891+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. kafka | [2024-04-09 14:12:25,683] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:30.263+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' kafka | [2024-04-09 14:12:25,683] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:30.263+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' kafka | [2024-04-09 14:12:25,683] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:30.264+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 1 ms kafka | [2024-04-09 14:12:25,683] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:46.235+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: kafka | [2024-04-09 14:12:25,683] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [] kafka | [2024-04-09 14:12:25,683] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:46.236+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-09 14:12:25,683] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f71a83e0-8991-48b5-bf16-0f80efc2e25f","timestampMs":1712671966197,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup"} kafka | [2024-04-09 14:12:25,683] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:46.236+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-09 14:12:25,683] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"f71a83e0-8991-48b5-bf16-0f80efc2e25f","timestampMs":1712671966197,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup"} kafka | [2024-04-09 14:12:25,683] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:46.246+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus kafka | [2024-04-09 14:12:25,684] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:46.336+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting kafka | [2024-04-09 14:12:25,684] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:46.336+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting listener kafka | [2024-04-09 14:12:25,684] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:46.336+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting timer policy-pap | [2024-04-09T14:12:46.337+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=f35c2eaa-9447-4409-bc81-28e3583921e3, expireMs=1712671996337] kafka | [2024-04-09 14:12:25,684] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:46.338+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=f35c2eaa-9447-4409-bc81-28e3583921e3, expireMs=1712671996337] kafka | [2024-04-09 14:12:25,684] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:46.339+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting enqueue kafka | [2024-04-09 14:12:25,684] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:46.339+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate started kafka | [2024-04-09 14:12:25,684] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:46.340+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-04-09 14:12:25,684] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | 
{"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f35c2eaa-9447-4409-bc81-28e3583921e3","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-09 14:12:25,684] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:46.370+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-09 14:12:25,684] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f35c2eaa-9447-4409-bc81-28e3583921e3","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-09 14:12:25,684] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:46.370+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE kafka | [2024-04-09 14:12:25,684] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:46.379+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-09 14:12:25,685] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f35c2eaa-9447-4409-bc81-28e3583921e3","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-09 14:12:25,685] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:46.380+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE kafka | [2024-04-09 14:12:25,685] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:46.393+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-09 14:12:25,685] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"58bba064-d794-42f0-bfa3-6b19bdabadb3","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-09 14:12:25,683] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 5 milliseconds for epoch 
0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:46.394+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus kafka | [2024-04-09 14:12:25,685] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:46.394+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-04-09 14:12:25,685] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f35c2eaa-9447-4409-bc81-28e3583921e3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"20e7f5d3-bd9e-4c55-ba54-348ec7aba681","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-09 14:12:25,685] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:46.395+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-04-09 14:12:25,685] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"58bba064-d794-42f0-bfa3-6b19bdabadb3","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:46.395+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping kafka | [2024-04-09 14:12:25,686] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-04-09T14:12:46.395+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping enqueue kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-04-09T14:12:46.395+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping timer kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
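The interleaved policy-pap entries above trace a request/response exchange over the policy-pdp-pap and policy-heartbeat topics: PAP publishes PDP_UPDATE, and because it also consumes both topics it sees its own request echoed back and discards it (the MessageTypeDispatcher "discarding event" entries), while PDP_STATUS responses are processed. A minimal sketch of that dispatch pattern, in Python; this is illustrative only, not PAP's actual code, and all names are made up for the example:

    import json

    # Message types this side publishes itself; seeing them again on the bus
    # means "our own traffic echoed back", so they are discarded.
    SELF_PUBLISHED = {"PDP_UPDATE", "PDP_STATE_CHANGE"}

    def dispatch(raw_event: str, listeners: dict) -> None:
        """Route one Kafka record by its messageName field."""
        event = json.loads(raw_event)
        msg_type = event.get("messageName")
        if msg_type in SELF_PUBLISHED:
            print(f"discarding event of type {msg_type}")  # mirrors the log above
            return
        handler = listeners.get(msg_type)
        if handler is None:
            print(f"no listeners for message of type {msg_type}")
            return
        handler(event)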
kafka | [2024-04-09 14:12:25,686] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:46.395+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=f35c2eaa-9447-4409-bc81-28e3583921e3, expireMs=1712671996337]
kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.396+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping listener
kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.396+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopped
kafka | [2024-04-09 14:12:25,686] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:46.402+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate successful
kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.402+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b start publishing next request
kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.402+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange starting
kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.402+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange starting listener
kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.403+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange starting timer
kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.403+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=cb095d6b-1806-48f9-af91-a9c5f08d2e3b, expireMs=1712671996403]
kafka | [2024-04-09 14:12:25,687] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.403+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange starting enqueue
kafka | [2024-04-09 14:12:25,687] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.403+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=cb095d6b-1806-48f9-af91-a9c5f08d2e3b, expireMs=1712671996403]
kafka | [2024-04-09 14:12:25,687] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.403+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange started
kafka | [2024-04-09 14:12:25,687] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.403+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
kafka | [2024-04-09 14:12:25,687] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-09T14:12:46.438+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-04-09 14:12:25,687] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-09 14:12:25,687] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
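Each outbound PAP request is guarded by a 30-second timer: the expireMs in the Timer entries is the registration time plus 30000 ms ("state-change timer waiting 30000ms"), and the timer is cancelled when the matching PDP_STATUS arrives, or fires and discards the request, as seen at 14:13:16 further down. A rough model of that bookkeeping; a sketch only, with invented names:

    import threading
    import time

    REQUEST_TIMEOUT_MS = 30000  # matches "state-change timer waiting 30000ms"

    class RequestTimer:
        """One cancellable timeout per in-flight request id."""

        def __init__(self, request_id: str, on_expiry):
            self.expire_ms = int(time.time() * 1000) + REQUEST_TIMEOUT_MS
            self._timer = threading.Timer(REQUEST_TIMEOUT_MS / 1000.0,
                                          on_expiry, args=(request_id,))
            self._timer.start()
            print(f"timer registered Timer [name={request_id}, expireMs={self.expire_ms}]")

        def cancel(self) -> None:
            """Called when the matching response arrives in time."""
            self._timer.cancel()
            print("timer cancelled")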
policy-pap | [2024-04-09T14:12:46.438+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.441+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d74c0d11-396a-45ee-aca8-c28faceef757","timestampMs":1712671966414,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f35c2eaa-9447-4409-bc81-28e3583921e3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"20e7f5d3-bd9e-4c55-ba54-348ec7aba681","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange stopping
kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange stopping enqueue
kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange stopping timer
kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=cb095d6b-1806-48f9-af91-a9c5f08d2e3b, expireMs=1712671996403]
kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange stopping listener
kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange stopped
kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange successful
kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b start publishing next request
kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting
kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
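The ServiceManager lines show that PAP serializes traffic to a given PDP: the queued PdpStateChange is published only after the preceding PdpUpdate has fully stopped (listener, timer, enqueue), and the follow-up PdpUpdate waits for the PdpStateChange in turn. Conceptually this behaves like a one-at-a-time work queue per PDP instance; a sketch of that behaviour under that assumption, not the actual ServiceManager implementation:

    from collections import deque

    class PdpRequestQueue:
        """Send at most one request to a PDP at a time; the next request is
        published only after the previous one completes or times out."""

        def __init__(self, pdp_name: str, publish):
            self.pdp_name = pdp_name
            self.publish = publish        # callable that writes to policy-pdp-pap
            self.pending = deque()
            self.in_flight = None

        def enqueue(self, request: dict) -> None:
            self.pending.append(request)
            if self.in_flight is None:
                self._start_next()

        def complete(self) -> None:
            """Called when the matching PDP_STATUS response arrives."""
            self.in_flight = None
            if self.pending:
                print(f"{self.pdp_name} start publishing next request")
                self._start_next()

        def _start_next(self) -> None:
            self.in_flight = self.pending.popleft()
            self.publish(self.in_flight)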
policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting listener
kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.454+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting timer
kafka | [2024-04-09 14:12:25,690] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.454+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=6626c05b-9878-4bec-8cb9-fdf1ff33442a, expireMs=1712671996454]
kafka | [2024-04-09 14:12:25,690] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.454+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting enqueue
kafka | [2024-04-09 14:12:25,690] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.454+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate started
kafka | [2024-04-09 14:12:25,690] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.454+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
kafka | [2024-04-09 14:12:25,690] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","timestampMs":1712671966430,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-09 14:12:25,690] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.454+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f35c2eaa-9447-4409-bc81-28e3583921e3
kafka | [2024-04-09 14:12:25,690] INFO [Broker id=1] Finished LeaderAndIsr request in 674ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
policy-pap | [2024-04-09T14:12:46.462+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-04-09 14:12:25,690] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-09 14:12:25,691] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.464+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
kafka | [2024-04-09 14:12:25,691] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-04-09T14:12:46.466+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-04-09 14:12:25,691] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","timestampMs":1712671966430,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-04-09 14:12:25,691] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-09 14:12:25,691] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-09 14:12:25,691] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-09 14:12:25,691] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
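The RequestIdDispatcher "no listener for request id ..." entries are the flip side of the request timer: a PDP_STATUS response names the request it answers in response.responseTo, and the consumer on each topic only has a callback registered for ids it is itself waiting on, so the heartbeat-side consumer logs "no listener" for ids owned by the pdp-pap side. A minimal sketch of that correlation, with invented class and method names:

    import json

    class RequestIdDispatcher:
        """Match PDP_STATUS responses to outstanding requests via
        response.responseTo == original requestId (illustrative only)."""

        def __init__(self):
            self.listeners = {}  # requestId -> callback

        def expect(self, request_id: str, callback) -> None:
            self.listeners[request_id] = callback

        def on_status(self, raw: str) -> None:
            status = json.loads(raw)
            req_id = status.get("response", {}).get("responseTo")
            callback = self.listeners.pop(req_id, None)
            if callback is None:
                print(f"no listener for request id {req_id}")  # as in the log
            else:
                callback(status)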
kafka | [2024-04-09 14:12:25,692] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-09 14:12:25,692] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-09 14:12:25,692] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-09 14:12:25,692] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-09 14:12:25,692] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-04-09 14:12:25,694] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=JIxyITR5QGSmI5P2pGX22A, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=ITmYpZ6rSK-iF5o_1J2T3Q, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,702] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-04-09 14:12:25,703] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-04-09 14:12:25,766] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-cc6c70d5-9f7a-4a85-8bbc-f691b3908254 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-09 14:12:25,779] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 8886bf5a-38da-4c7c-af7d-ca09814a22ad in Empty state. Created a new member id consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3-937938bf-de0f-412f-b074-806532e1f873 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-09 14:12:25,794] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-cc6c70d5-9f7a-4a85-8bbc-f691b3908254 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-09 14:12:25,794] INFO [GroupCoordinator 1]: Preparing to rebalance group 8886bf5a-38da-4c7c-af7d-ca09814a22ad in state PreparingRebalance with old generation 0 (__consumer_offsets-23) (reason: Adding new member consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3-937938bf-de0f-412f-b074-806532e1f873 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-09 14:12:26,530] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 5bf355d1-b191-4690-8ff2-dd6842394381 in Empty state. Created a new member id consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2-780b758d-7817-467f-b505-47072bd7ea3f and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
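The GroupCoordinator entries show the standard two-round join: a new consumer first joins with an empty member id, the broker assigns one and asks it to rejoin (the "rebalance failed due to MemberIdRequiredException" reason is the normal retry path, not an error), and the group then stabilizes once the leader's assignment is distributed. From the client side this is all hidden behind an ordinary subscription; a minimal consumer matching the policy-pap group seen above might look like the following, where the bootstrap address is an assumption for this compose setup:

    from kafka import KafkaConsumer  # pip install kafka-python

    # Joining with a group_id triggers exactly the coordinator handshake
    # logged above: join with unknown member id -> MemberIdRequiredException
    # -> rejoin with the assigned id -> group stabilizes.
    consumer = KafkaConsumer(
        "policy-pdp-pap",
        bootstrap_servers="localhost:29092",  # assumed address, not from the log
        group_id="policy-pap",
        value_deserializer=lambda v: v.decode("utf-8"),
    )
    for record in consumer:
        print(record.topic, record.value)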
kafka | [2024-04-09 14:12:26,534] INFO [GroupCoordinator 1]: Preparing to rebalance group 5bf355d1-b191-4690-8ff2-dd6842394381 in state PreparingRebalance with old generation 0 (__consumer_offsets-27) (reason: Adding new member consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2-780b758d-7817-467f-b505-47072bd7ea3f with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-09 14:12:28,808] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-09 14:12:28,811] INFO [GroupCoordinator 1]: Stabilized group 8886bf5a-38da-4c7c-af7d-ca09814a22ad generation 1 (__consumer_offsets-23) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-09 14:12:28,828] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-cc6c70d5-9f7a-4a85-8bbc-f691b3908254 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-09 14:12:28,830] INFO [GroupCoordinator 1]: Assignment received from leader consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3-937938bf-de0f-412f-b074-806532e1f873 for group 8886bf5a-38da-4c7c-af7d-ca09814a22ad for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-09 14:12:29,534] INFO [GroupCoordinator 1]: Stabilized group 5bf355d1-b191-4690-8ff2-dd6842394381 generation 1 (__consumer_offsets-27) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-04-09 14:12:29,547] INFO [GroupCoordinator 1]: Assignment received from leader consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2-780b758d-7817-467f-b505-47072bd7ea3f for group 5bf355d1-b191-4690-8ff2-dd6842394381 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-04-09T14:12:46.466+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2024-04-09T14:12:46.474+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d74c0d11-396a-45ee-aca8-c28faceef757","timestampMs":1712671966414,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-09T14:12:46.474+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id cb095d6b-1806-48f9-af91-a9c5f08d2e3b
policy-pap | [2024-04-09T14:12:46.477+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","timestampMs":1712671966430,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-09T14:12:46.478+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2024-04-09T14:12:46.480+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"b2ae2d05-b707-414a-b0f7-ae1aec005c8b","timestampMs":1712671966465,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"b2ae2d05-b707-414a-b0f7-ae1aec005c8b","timestampMs":1712671966465,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping
policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping enqueue
policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping timer
policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=6626c05b-9878-4bec-8cb9-fdf1ff33442a, expireMs=1712671996454]
policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping listener
policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopped
policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6626c05b-9878-4bec-8cb9-fdf1ff33442a
policy-pap | [2024-04-09T14:12:46.486+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate successful
policy-pap | [2024-04-09T14:12:46.486+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b has no more requests
policy-pap | [2024-04-09T14:12:46.486+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b has no more requests
policy-pap | [2024-04-09T14:12:50.887+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2024-04-09T14:12:50.894+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2024-04-09T14:12:51.294+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup
policy-pap | [2024-04-09T14:12:51.825+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup
policy-pap | [2024-04-09T14:12:51.826+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup
policy-pap | [2024-04-09T14:12:52.354+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
policy-pap | [2024-04-09T14:12:52.562+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0
policy-pap | [2024-04-09T14:12:52.650+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2024-04-09T14:12:52.650+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup
policy-pap | [2024-04-09T14:12:52.650+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup
policy-pap | [2024-04-09T14:12:52.664+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-09T14:12:52Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-09T14:12:52Z, user=policyadmin)]
policy-pap | [2024-04-09T14:12:53.394+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
policy-pap | [2024-04-09T14:12:53.395+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
policy-pap | [2024-04-09T14:12:53.395+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0
policy-pap | [2024-04-09T14:12:53.396+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
policy-pap | [2024-04-09T14:12:53.396+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
policy-pap | [2024-04-09T14:12:53.409+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-09T14:12:53Z, user=policyadmin)]
policy-pap | [2024-04-09T14:12:53.714+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group defaultGroup
policy-pap | [2024-04-09T14:12:53.714+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group testGroup
policy-pap | [2024-04-09T14:12:53.714+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
policy-pap | [2024-04-09T14:12:53.714+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2024-04-09T14:12:53.715+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group testGroup
policy-pap | [2024-04-09T14:12:53.715+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group testGroup
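The testGroup deploy/undeploy sequence above (and the final audit record that follows) is driven by REST calls against PAP on port 6969, the port visible in the http-nio-6969 thread names. A hedged sketch of the shape of those calls; the endpoint path and credentials are the usual PAP API conventions assumed here, not values taken from this log:
# Assumed PAP deployment API; replace the masked credentials as appropriate.
curl -sk -u 'policyadmin:****' -H 'Content-Type: application/json' \
  -d '{"policies":[{"policy-id":"onap.restart.tca","policy-version":"1.0.0"}]}' \
  https://localhost:6969/policy/pap/v1/pdps/policies
curl -sk -u 'policyadmin:****' -X DELETE \
  https://localhost:6969/policy/pap/v1/pdps/policies/onap.restart.tca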
policy-pap | [2024-04-09T14:12:53.727+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-09T14:12:53Z, user=policyadmin)]
policy-pap | [2024-04-09T14:13:14.318+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
policy-pap | [2024-04-09T14:13:14.320+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
policy-pap | [2024-04-09T14:13:16.338+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=f35c2eaa-9447-4409-bc81-28e3583921e3, expireMs=1712671996337]
policy-pap | [2024-04-09T14:13:16.403+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=cb095d6b-1806-48f9-af91-a9c5f08d2e3b, expireMs=1712671996403]
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping kafka ...
Stopping grafana ...
Stopping policy-api ...
Stopping compose_zookeeper_1 ...
Stopping mariadb ...
Stopping simulator ...
Stopping prometheus ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping simulator ... done
Stopping policy-pap ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping compose_zookeeper_1 ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing kafka ...
Removing grafana ...
Removing policy-api ...
Removing policy-db-migrator ...
Removing compose_zookeeper_1 ...
Removing mariadb ...
Removing simulator ...
Removing prometheus ...
Removing compose_zookeeper_1 ... done
Removing simulator ... done
Removing grafana ... done
Removing mariadb ... done
Removing prometheus ... done
Removing policy-apex-pdp ... done
Removing policy-pap ... done
Removing policy-api ... done
Removing policy-db-migrator ... done
Removing kafka ... done
Removing network compose_default
++ cd /w/workspace/policy-pap-master-project-csit-pap
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ [[ -n /tmp/tmp.6QreRUgV9i ]]
+ rsync -av /tmp/tmp.6QreRUgV9i/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt

sent 919,740 bytes  received 95 bytes  1,839,670.00 bytes/sec
total size is 919,195  speedup is 1.00
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+ exit 0
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2081 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml:
Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir:
Done!
-Assigning results to build:
Done!
-Checking thresholds:
Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
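The run of "+ set +o ..." lines in the teardown trace above is the job's load_set helper restoring shell options after a traced section. A minimal reconstruction of that idiom, assembled from the xtrace output rather than the script source, so the exact body is an assumption:
# Sketch of the save/restore idiom: record the active options, then
# switch each one off (long-form names from SHELLOPTS, short-form
# letters from $-, e.g. hxB).
load_set() {
  _setopts="$-"
  for i in $(echo "$SHELLOPTS" | tr ':' ' '); do
    set +o "$i"    # disable each long-form option (xtrace, hashall, ...)
  done
  for i in $(echo "$_setopts" | sed 's/./& /g'); do
    set +"$i"      # disable each short-form option (+h, +x, +B)
  done
}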
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13453795783583277534.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8644787431641409023.sh
---> package-listing.sh
++ tr '[:upper:]' '[:lower:]'
++ facter osfamily
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7656286616039403647.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-3z0W from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-3z0W/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins875017452155230249.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config10716431719136852717tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12302234228367470158.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1361430092756309133.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-3z0W from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-3z0W/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6054504162437819749.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5241280818783578738.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-3z0W from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-3z0W/bin to PATH
INFO: No Stack...
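The package-listing.sh trace above snapshots the installed packages and archives the delta against the pre-build snapshot. The core of the idiom as a standalone sketch; paths mirror the trace, while the WORKSPACE variable and the "|| true" guard (diff exits non-zero whenever the lists differ) are additions of this sketch:
# Debian-family branch only, as selected above via OS_FAMILY=debian.
dpkg -l | grep '^ii' > /tmp/packages_end.txt
diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true
mkdir -p "${WORKSPACE}/archives/"
cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt "${WORKSPACE}/archives/"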
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins11014231714605064695.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-3z0W from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-3z0W/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1638
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-21829 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2800.000
BogoMIPS:            5600.00
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
---> nproc:
8
---> df -h:
Filesystem  Size  Used  Avail  Use%  Mounted on
udev         16G     0    16G    0%  /dev
tmpfs       3.2G  708K   3.2G    1%  /run
/dev/vda1   155G   14G   142G    9%  /
tmpfs        16G     0    16G    0%  /dev/shm
tmpfs       5.0M     0   5.0M    0%  /run/lock
tmpfs        16G     0    16G    0%  /sys/fs/cgroup
/dev/vda15  105M  4.4M   100M    5%  /boot/efi
tmpfs       3.2G     0   3.2G    0%  /run/user/1001
---> free -m:
       total   used   free   shared  buff/cache  available
Mem:   32167    833  25115        0        6218      30877
Swap:   1023      0   1023
---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:31:73:a4 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.36/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85907sec preferred_lft 85907sec
    inet6 fe80::f816:3eff:fe31:73a4/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:8e:78:67:97 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21829)  04/09/24  _x86_64_  (8 CPU)
14:07:25  LINUX RESTART  (8 CPU)
14:08:02     tps     rtps    wtps   bread/s    bwrtn/s
14:09:01  118.79    45.38   73.41   2031.66   26365.57
14:10:01  106.28    13.86   92.42   1126.21   28830.26
14:11:01  105.57     9.55   96.02   1688.52   41283.12
14:12:01  464.41    11.68  452.72    775.54  134189.72
14:13:01   30.51     0.38   30.13     31.46   23144.83
14:14:01   16.25     0.00   16.25      0.00   18910.78
14:15:01   64.32     0.88   63.44     45.33   21415.90
Average:  129.47    11.60  117.88    811.20   42057.38
14:08:02  kbmemfree  kbavail   kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
14:09:01   30168756  31731024    2770464      8.41      67832   1806996   1445620     4.25    839104  1641680   136412
14:10:01   29892100  31727572    3047120      9.25      84488   2048636   1425832     4.20    856172  1875232   121992
14:11:01   27064936  31676632    5874284     17.83     129916   4662640   1397440     4.11   1000808  4400468  2092788
14:12:01   24794100  30737484    8145120     24.73     155660   5898008   7826208    23.03   2058944  5491624      120
14:13:01   23543692  29603760    9395528     28.52     157420   6009204   8835108    25.99   3266720  5521116      520
14:14:01   23475924  29536860    9463296     28.73     157548   6009784   8852500    26.05   3335400  5521060      212
14:15:01   25702144  31595028    7237076     21.97     158944   5858220   1574548     4.63   1319716  5374016     2948
Average:   26377379  30944051    6561841     19.92     130258   4613355   4479608    13.18   1810981  4260742   336427
14:08:02  IFACE            rxpck/s  txpck/s   rxkB/s  txkB/s  rxcmp/s  txcmp/s  rxmcst/s  %ifutil
14:09:01  ens3              296.17   192.87  1345.97   55.14     0.00     0.00      0.00     0.00
14:09:01  lo                  1.49     1.49     0.17    0.17     0.00     0.00      0.00     0.00
14:09:01  docker0             0.00     0.00     0.00    0.00     0.00     0.00      0.00     0.00
14:10:01  ens3               42.33    31.68   606.05    6.74     0.00     0.00      0.00     0.00
14:10:01  lo                  1.13     1.13     0.12    0.12     0.00     0.00      0.00     0.00
14:10:01  docker0             0.00     0.00     0.00    0.00     0.00     0.00      0.00     0.00
14:11:01  ens3              950.01   473.92 19517.70   35.15     0.00     0.00      0.00     0.00
14:11:01  br-d7c642aca212     0.00     0.00     0.00    0.00     0.00     0.00      0.00     0.00
14:11:01  lo                  9.53     9.53     0.95    0.95     0.00     0.00      0.00     0.00
14:11:01  docker0             0.00     0.00     0.00    0.00     0.00     0.00      0.00     0.00
14:12:01  veth41c52b9         0.05     0.32     0.00    0.02     0.00     0.00      0.00     0.00
14:12:01  vethe5e45da         1.77     1.85     0.17    0.18     0.00     0.00      0.00     0.00
14:12:01  ens3              376.25   209.12 12289.98   14.69     0.00     0.00      0.00     0.00
14:12:01  br-d7c642aca212     0.63     0.52     0.05    0.29     0.00     0.00      0.00     0.00
14:13:01  veth41c52b9         0.50     0.53     0.05    1.32     0.00     0.00      0.00     0.00
14:13:01  vethe5e45da        15.70    13.31     1.97    1.98     0.00     0.00      0.00     0.00
14:13:01  ens3                6.00     4.75     1.45    1.58     0.00     0.00      0.00     0.00
14:13:01  br-d7c642aca212     1.82     2.08     1.75    1.69     0.00     0.00      0.00     0.00
14:14:01  veth41c52b9         0.57     0.58     0.05    1.52     0.00     0.00      0.00     0.00
14:14:01  vethe5e45da        13.83     9.33     1.05    1.34     0.00     0.00      0.00     0.00
14:14:01  ens3                1.65     1.38     0.34    0.27     0.00     0.00      0.00     0.00
14:14:01  br-d7c642aca212     0.85     0.83     0.11    0.08     0.00     0.00      0.00     0.00
14:15:01  ens3               44.91    38.19    66.40   29.13     0.00     0.00      0.00     0.00
14:15:01  lo                 35.48    35.48     6.28    6.28     0.00     0.00      0.00     0.00
14:15:01  docker0             0.00     0.00     0.00    0.00     0.00     0.00      0.00     0.00
Average:  ens3              245.21   135.85  4840.87   20.30     0.00     0.00      0.00     0.00
Average:  lo                  4.53     4.53     0.85    0.85     0.00     0.00      0.00     0.00
Average:  docker0             0.00     0.00     0.00    0.00     0.00     0.00      0.00     0.00
---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21829)  04/09/24  _x86_64_  (8 CPU)
14:07:25  LINUX RESTART  (8 CPU)
14:08:02  CPU  %user  %nice  %system  %iowait  %steal  %idle
14:09:01  all  11.07   0.00     0.90     3.31    0.04  84.68
14:09:01    0  19.76   0.00     1.62     1.67    0.03  76.93
14:09:01    1  16.12   0.00     0.95     8.84    0.05  74.04
14:09:01    2   4.13   0.00     0.68     3.68    0.05  91.46
14:09:01    3  14.24   0.00     0.83     1.53    0.03  83.36
14:09:01    4  15.68   0.00     1.11     0.87    0.05  82.29
14:09:01    5   6.23   0.00     0.73     0.46    0.03  92.55
14:09:01    6  10.65   0.00     0.92     0.36    0.03  88.05
14:09:01    7   1.78   0.00     0.37     9.04    0.02  88.79
14:10:01  all   8.14   0.00     0.55     4.22    0.03  87.07
14:10:01    0   2.46   0.00     0.42     7.45    0.03  89.64
14:10:01    1   3.62   0.00     0.38    11.72    0.03  84.24
14:10:01    2  20.40   0.00     1.27    11.09    0.05  67.19
14:10:01    3  12.15   0.00     0.92     1.39    0.03  85.50
14:10:01    4  14.14   0.00     0.60     1.42    0.03  83.80
14:10:01    5   7.77   0.00     0.50     0.33    0.02  91.38
14:10:01    6   3.74   0.00     0.20     0.25    0.02  95.79
14:10:01    7   0.87   0.00     0.12     0.12    0.02  98.88
14:11:01  all  10.39   0.00     4.16     4.09    0.08  81.28
14:11:01    0   9.78   0.00     3.60     1.40    0.07  85.16
14:11:01    1  10.08   0.00     4.59    11.32    0.09  73.92
14:11:01    2   9.46   0.00     2.92    10.66    0.07  76.89
14:11:01    3  12.28   0.00     5.56     1.05    0.09  81.01
14:11:01    4   9.04   0.00     3.57     0.20    0.12  87.07
14:11:01    5   9.89   0.00     4.31     0.22    0.07  85.51
14:11:01    6  11.75   0.00     3.60     1.05    0.07  83.53
14:11:01    7  10.86   0.00     5.14     6.80    0.08  77.10
14:12:01  all  11.23   0.00     3.69    10.95    0.07  74.07
14:12:01    0  13.01   0.00     3.71    10.58    0.05  72.65
14:12:01    1   9.99   0.00     3.60     6.32    0.05  80.04
14:12:01    2  12.56   0.00     4.28    29.31    0.10  53.76
14:12:01    3   9.75   0.00     4.01    11.12    0.07  75.06
14:12:01    4  11.22   0.00     2.99     0.50    0.05  85.23
14:12:01    5  13.41   0.00     3.04     1.76    0.07  81.71
14:12:01    6  10.33   0.00     3.63     5.43    0.05  80.55
14:12:01    7   9.57   0.00     4.26    22.72    0.08  63.37
14:13:01  all  23.71   0.00     2.17     1.15    0.08  72.89
14:13:01    0  24.58   0.00     2.22     0.05    0.08  73.06
14:13:01    1  29.16   0.00     2.61     0.05    0.07  68.11
14:13:01    2  25.99   0.00     2.28     2.96    0.05  68.72
14:13:01    3  16.11   0.00     1.65     0.05    0.08  82.10
14:13:01    4  23.12   0.00     2.13     4.49    0.08  70.19
14:13:01    5  33.08   0.00     3.18     0.03    0.08  63.62
14:13:01    6  16.81   0.00     1.82     1.59    0.10  79.68
14:13:01    7  20.80   0.00     1.47     0.00    0.07  77.66
14:14:01  all   1.20   0.00     0.20     1.32    0.04  97.24
14:14:01    0   1.25   0.00     0.18     0.00    0.05  98.52
14:14:01    1   1.65   0.00     0.40     0.00    0.07  97.88
14:14:01    2   0.83   0.00     0.13     0.00    0.03  99.00
14:14:01    3   0.99   0.00     0.23     0.03    0.07  98.68
14:14:01    4   1.43   0.00     0.18    10.30    0.03  88.05
14:14:01    5   1.08   0.00     0.13     0.02    0.03  98.73
14:14:01    6   1.40   0.00     0.17     0.13    0.02  98.28
14:14:01    7   0.93   0.00     0.10     0.10    0.03  98.83
14:15:01  all   3.02   0.00     0.65     1.49    0.05  94.79
14:15:01    0   1.80   0.00     0.77     0.43    0.03  96.97
14:15:01    1   1.22   0.00     0.72     0.05    0.05  97.96
14:15:01    2   2.79   0.00     0.62     0.10    0.03  96.46
14:15:01    3   1.95   0.00     0.60     0.32    0.05  97.08
14:15:01    4   2.85   0.00     0.43     8.95    0.05  87.72
14:15:01    5   1.79   0.00     0.73     0.02    0.05  97.41
14:15:01    6   1.57   0.00     0.58     1.09    0.03  96.72
14:15:01    7  10.20   0.00     0.77     0.84    0.05  88.15
Average:  all   9.81   0.00     1.75     3.78    0.05  84.60
Average:    0  10.34   0.00     1.78     3.08    0.05  84.74
Average:    1  10.24   0.00     1.89     5.45    0.06  82.37
Average:    2  10.89   0.00     1.74     8.23    0.06  79.09
Average:    3   9.61   0.00     1.96     2.20    0.06  86.16
Average:    4  11.04   0.00     1.57     3.85    0.06  83.49
Average:    5  10.46   0.00     1.80     0.41    0.05  87.28
Average:    6   8.02   0.00     1.56     1.41    0.05  88.96
Average:    7   7.86   0.00     1.74     5.62    0.05  84.73
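For reference, the host statistics above can be regenerated from the day's sysstat archive with the same flag sets as the report headings; the data-file path is an assumption (Debian/Ubuntu keep the archives under /var/log/sysstat, sa09 being the 9th of the month):
# Replay the collected samples instead of sampling live.
sar -b -r -n DEV -f /var/log/sysstat/sa09   # I/O, memory, per-interface network
sar -P ALL -f /var/log/sysstat/sa09         # per-CPU utilization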