Started by upstream project "policy-pap-master-merge-java" build number 345
originally caused by:
 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/pap/+/137265
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-6650 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-gnrrgAsB5RIM/agent.2133
SSH_AGENT_PID=2135
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_3688725985997811433.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_3688725985997811433.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision dd836dc2d2bd379fba19b395c912d32f1bc7ee38 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f dd836dc2d2bd379fba19b395c912d32f1bc7ee38 # timeout=30
Commit message: "Update snapshot and/or references of policy/docker to latest snapshots"
 > git rev-list --no-walk dd836dc2d2bd379fba19b395c912d32f1bc7ee38 # timeout=10
provisioning config files...
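[editor's note] The checkout sequence above can be reproduced outside Jenkins with a short script. A minimal sketch, assuming the git:// mirror is reachable anonymously (the CI job authenticates with the onap-jenkins-ssh credential instead); the local workspace path is hypothetical:

#!/bin/bash
# Sketch of the checkout the job performs above.
set -euo pipefail

WORKSPACE=/tmp/policy-pap-workspace   # hypothetical local path
REPO=git://cloud.onap.org/mirror/policy/docker.git
REV=dd836dc2d2bd379fba19b395c912d32f1bc7ee38

git init "$WORKSPACE"
cd "$WORKSPACE"
# Fetch all branch heads into remote-tracking refs, as the job does.
git fetch --tags --progress -- "$REPO" '+refs/heads/*:refs/remotes/origin/*'
git config remote.origin.url "$REPO"
git config --add remote.origin.fetch '+refs/heads/*:refs/remotes/origin/*'
# Check out the exact revision the build used (detached HEAD).
git checkout -f "$REV"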
copy managed file [npmrc] to file:/home/jenkins/.npmrc copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4948147167366691383.sh ---> python-tools-install.sh Setup pyenv: * system (set by /opt/pyenv/version) * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.6 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-pit5 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-pit5/bin to PATH Generating Requirements File Python 3.10.6 pip 24.0 from /tmp/venv-pit5/lib/python3.10/site-packages/pip (python 3.10) appdirs==1.4.4 argcomplete==3.2.2 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.34.44 botocore==1.34.44 bs4==0.0.2 cachetools==5.3.2 certifi==2024.2.2 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 cliff==4.5.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==2.5.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.6.1 docker==4.2.2 dogpile.cache==1.3.1 email-validator==2.1.0.post1 filelock==3.13.1 future==0.18.3 gitdb==4.0.11 GitPython==3.1.42 google-auth==2.28.0 httplib2==0.22.0 identify==2.5.35 idna==3.6 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.21.1 jsonschema-specifications==2023.12.1 keystoneauth1==5.5.0 kubernetes==29.0.0 lftools==0.37.8 lxml==5.1.0 MarkupSafe==2.1.5 msgpack==1.0.7 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.2.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==0.62.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==3.0.0 oslo.config==9.3.0 oslo.context==5.3.0 oslo.i18n==6.2.0 oslo.log==5.4.0 oslo.serialization==5.3.0 oslo.utils==7.0.0 packaging==23.2 pbr==6.0.0 platformdirs==4.2.0 prettytable==3.9.0 pyasn1==0.5.1 pyasn1-modules==0.3.0 pycparser==2.21 pygerrit2==2.0.15 PyGithub==2.2.0 pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2 pyrsistent==0.20.0 python-cinderclient==9.4.0 python-dateutil==2.8.2 python-heatclient==3.4.0 python-jenkins==1.8.2 python-keystoneclient==5.3.0 python-magnumclient==4.3.0 python-novaclient==18.4.0 python-openstackclient==6.0.1 python-swiftclient==4.4.0 pytz==2024.1 PyYAML==6.0.1 referencing==0.33.0 requests==2.31.0 requests-oauthlib==1.3.1 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.18.0 rsa==4.9 ruamel.yaml==0.18.6 ruamel.yaml.clib==0.2.8 s3transfer==0.10.0 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5 stevedore==5.1.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.3 tqdm==4.66.2 typing_extensions==4.9.0 tzdata==2024.1 urllib3==1.26.18 virtualenv==20.25.0 wcwidth==0.2.13 websocket-client==1.7.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0 yq==3.2.3 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SET_JDK_VERSION=openjdk17 GIT_URL="git://cloud.onap.org/mirror" [EnvInject] - Variables injected successfully. 
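[editor's note] The python-tools-install step above amounts to creating a throwaway venv, installing lftools, and freezing the package set for the build record. A minimal sketch of the equivalent commands, assuming a suitable python3 is already on PATH (lf-activate-venv itself is an LF global-jjb helper whose internals are not shown in this log):

#!/bin/bash
# Sketch of the venv setup traced above: build a temporary venv,
# install lftools, and record the resulting package set.
set -euo pipefail

VENV=$(mktemp -d /tmp/venv-XXXX)      # the CI run used /tmp/venv-pit5
python3 -m venv "$VENV"
"$VENV/bin/pip" install --upgrade pip
"$VENV/bin/pip" install lftools
export PATH="$VENV/bin:$PATH"
# "Generating Requirements File": freeze the environment for the record.
"$VENV/bin/pip" freeze > requirements.txt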
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins5842781131184482745.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "17.0.4" 2022-07-19 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins17853064712713074651.sh + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap + set +u + save_set + RUN_CSIT_SAVE_SET=ehxB + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace + '[' 1 -eq 0 ']' + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + export ROBOT_VARIABLES= + ROBOT_VARIABLES= + export PROJECT=pap + PROJECT=pap + cd /w/workspace/policy-pap-master-project-csit-pap + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ mktemp -d ++ ROBOT_VENV=/tmp/tmp.0gOomcmeSd ++ echo ROBOT_VENV=/tmp/tmp.0gOomcmeSd +++ python3 --version ++ echo 'Python version is: Python 3.6.9' Python version is: Python 3.6.9 ++ python3 -m venv --clear /tmp/tmp.0gOomcmeSd ++ source /tmp/tmp.0gOomcmeSd/bin/activate +++ deactivate nondestructive +++ '[' -n '' ']' +++ '[' -n '' ']' +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r +++ '[' -n '' ']' +++ unset VIRTUAL_ENV +++ '[' '!' 
nondestructive = nondestructive ']' +++ VIRTUAL_ENV=/tmp/tmp.0gOomcmeSd +++ export VIRTUAL_ENV +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ PATH=/tmp/tmp.0gOomcmeSd/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ export PATH +++ '[' -n '' ']' +++ '[' -z '' ']' +++ _OLD_VIRTUAL_PS1= +++ '[' 'x(tmp.0gOomcmeSd) ' '!=' x ']' +++ PS1='(tmp.0gOomcmeSd) ' +++ export PS1 +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r ++ set -exu ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' ++ echo 'Installing Python Requirements' Installing Python Requirements ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 lxml==5.1.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 WebTest==3.0.0 zipp==3.6.0 ++ mkdir -p /tmp/tmp.0gOomcmeSd/src/onap ++ rm -rf /tmp/tmp.0gOomcmeSd/src/onap/testsuite ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre ++ echo 'Installing python confluent-kafka library' Installing python confluent-kafka library ++ python3 -m pip install -qq confluent-kafka ++ echo 'Uninstall docker-py and reinstall docker.' Uninstall docker-py and reinstall docker. 
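[editor's note] (The docker-py swap itself continues below.) Condensed, the prepare-robot-env.sh trace above reduces to a venv plus the Robot Framework toolchain used by the CSIT suites. A minimal sketch under the same pins and index URLs shown in the trace:

#!/bin/bash
# Sketch of prepare-robot-env.sh as traced above.
set -euo pipefail

ROBOT_VENV=$(mktemp -d)
python3 -m venv --clear "$ROBOT_VENV"
source "$ROBOT_VENV/bin/activate"
python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
# Pre-release ONAP Robot keywords come from the Nexus staging index.
python3 -m pip install -qq --upgrade \
    --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple \
    'robotframework-onap==0.6.0.*' --pre
python3 -m pip install -qq confluent-kafka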
++ python3 -m pip uninstall -y -qq docker ++ python3 -m pip install -U -qq docker ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1 deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 future==0.18.3 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.1.0 MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0 ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 PyYAML==6.0.1 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0 ++ grep -q Linux ++ uname ++ sudo apt-get -y -qq install libxml2-utils + load_set + _setopts=ehuxB ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o nounset + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo ehuxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +e + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +u + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + source_safely /tmp/tmp.0gOomcmeSd/bin/activate + '[' -z /tmp/tmp.0gOomcmeSd/bin/activate ']' + relax_set + set +e + set +o pipefail + . /tmp/tmp.0gOomcmeSd/bin/activate ++ deactivate nondestructive ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ unset _OLD_VIRTUAL_PATH ++ '[' -n '' ']' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r ++ '[' -n '' ']' ++ unset VIRTUAL_ENV ++ '[' '!' 
nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/tmp/tmp.0gOomcmeSd ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ PATH=/tmp/tmp.0gOomcmeSd/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ '[' -n '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1='(tmp.0gOomcmeSd) ' ++ '[' 'x(tmp.0gOomcmeSd) ' '!=' x ']' ++ PS1='(tmp.0gOomcmeSd) (tmp.0gOomcmeSd) ' ++ export PS1 ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + export TEST_OPTIONS= + TEST_OPTIONS= ++ mktemp -d + WORKDIR=/tmp/tmp.l4Kz55UmBK + cd /tmp/tmp.l4Kz55UmBK + docker login -u docker -p docker nexus3.onap.org:10001 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview +++ GERRIT_BRANCH=master +++ echo GERRIT_BRANCH=master GERRIT_BRANCH=master +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 
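[editor's note] The branch detection just traced is a one-liner worth calling out. A minimal sketch of how node-templates.sh derives the branch from .gitreview and fetches the models repo, per the trace above (run from the workspace root):

#!/bin/bash
# Derive the default branch from .gitreview and clone policy-models
# at that branch, as node-templates.sh does above.
set -euo pipefail

GERRIT_BRANCH=$(awk -F= '$1 == "defaultbranch" { print $2 }' .gitreview)
echo "GERRIT_BRANCH=${GERRIT_BRANCH}"
rm -rf models && mkdir models
git clone -b "$GERRIT_BRANCH" --single-branch \
    https://github.com/onap/policy-models.git models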
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
+++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
+++ grafana=false
+++ gui=false
+++ [[ 2 -gt 0 ]]
+++ key=apex-pdp
+++ case $key in
+++ echo apex-pdp
apex-pdp
+++ component=apex-pdp
+++ shift
+++ [[ 1 -gt 0 ]]
+++ key=--grafana
+++ case $key in
+++ grafana=true
+++ shift
+++ [[ 0 -gt 0 ]]
+++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
+++ echo 'Configuring docker compose...'
Configuring docker compose...
+++ source export-ports.sh
+++ source get-versions.sh
+++ '[' -z pap ']'
+++ '[' -n apex-pdp ']'
+++ '[' apex-pdp == logs ']'
+++ '[' true = true ']'
+++ echo 'Starting apex-pdp application with Grafana'
Starting apex-pdp application with Grafana
+++ docker-compose up -d apex-pdp grafana
Creating network "compose_default" with the default driver
Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
latest: Pulling from prom/prometheus
Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789
Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
latest: Pulling from grafana/grafana
Digest: sha256:8640e5038e83ca4554ed56b9d76375158bcd51580238c6f5d8adaf3f20dd5379
Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
10.10.2: Pulling from mariadb
Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-models-simulator
Digest: sha256:191ea80d58976372d6ed1c0c58381553b1e255dde7f5cbf6557b43cee2dc0cb8
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT
Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
latest: Pulling from confluentinc/cp-zookeeper
Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866
Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
Pulling kafka (confluentinc/cp-kafka:latest)...
latest: Pulling from confluentinc/cp-kafka
Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047
Status: Downloaded newer image for confluentinc/cp-kafka:latest
Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator
Digest: sha256:6dc9b5d15d5c92b51ee9067496c5209e4419813b605f45e6e3ce7c61cbd0cf2d
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT
Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-api
Digest: sha256:83ef526db4e13f0a2bb480c243664be5d32f31e0113221dd08f72a39c815ca19
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT
Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-pap
Digest: sha256:95218046ca6b26d15ab4d82d39156a22b80b66d8b0ba39e77bb8669c622da231
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT
Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-apex-pdp
Digest: sha256:e564d9bc98287b5baa8fc9562a6b0ef4a26864d537b17d2e62843b316ad48fa0
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT
Creating simulator ...
Creating compose_zookeeper_1 ...
Creating prometheus ...
Creating mariadb ...
Creating mariadb ... done
Creating policy-db-migrator ...
Creating policy-db-migrator ... done
Creating policy-api ...
Creating policy-api ... done
Creating simulator ... done
Creating prometheus ... done
Creating grafana ...
Creating compose_zookeeper_1 ... done
Creating kafka ...
Creating kafka ... done
Creating policy-pap ...
Creating policy-pap ... done
Creating policy-apex-pdp ...
Creating policy-apex-pdp ... done
Creating grafana ... done
+++ echo 'Prometheus server: http://localhost:30259'
Prometheus server: http://localhost:30259
+++ echo 'Grafana server: http://localhost:30269'
Grafana server: http://localhost:30269
+++ cd /w/workspace/policy-pap-master-project-csit-pap
++ sleep 10
++ unset http_proxy https_proxy
++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
Waiting for REST to come up on localhost port 30003...
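[editor's note] wait_for_rest.sh's exact contents are not shown in this log; the status polls that follow suggest a simple readiness loop. A plausible minimal sketch of such a probe (hypothetical implementation, not the actual script):

#!/bin/bash
# Hypothetical readiness probe equivalent to wait_for_rest.sh:
# poll host:port until something accepts a TCP connection.
host=${1:-localhost}
port=${2:-30003}
echo "Waiting for REST to come up on ${host} port ${port}..."
until nc -z "$host" "$port"; do
    sleep 2
done
echo "REST endpoint ${host}:${port} is up."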
NAMES                 STATUS
policy-apex-pdp       Up 11 seconds
policy-pap            Up 12 seconds
kafka                 Up 13 seconds
grafana               Up 10 seconds
policy-api            Up 17 seconds
compose_zookeeper_1   Up 14 seconds
mariadb               Up 19 seconds
simulator             Up 16 seconds
prometheus            Up 15 seconds
NAMES                 STATUS
policy-apex-pdp       Up 16 seconds
policy-pap            Up 17 seconds
kafka                 Up 18 seconds
grafana               Up 15 seconds
policy-api            Up 22 seconds
compose_zookeeper_1   Up 19 seconds
mariadb               Up 24 seconds
simulator             Up 21 seconds
prometheus            Up 20 seconds
NAMES                 STATUS
policy-apex-pdp       Up 21 seconds
policy-pap            Up 22 seconds
kafka                 Up 23 seconds
grafana               Up 20 seconds
policy-api            Up 27 seconds
compose_zookeeper_1   Up 24 seconds
mariadb               Up 29 seconds
simulator             Up 26 seconds
prometheus            Up 25 seconds
NAMES                 STATUS
policy-apex-pdp       Up 26 seconds
policy-pap            Up 27 seconds
kafka                 Up 28 seconds
grafana               Up 25 seconds
policy-api            Up 32 seconds
compose_zookeeper_1   Up 29 seconds
mariadb               Up 34 seconds
simulator             Up 31 seconds
prometheus            Up 30 seconds
NAMES                 STATUS
policy-apex-pdp       Up 31 seconds
policy-pap            Up 32 seconds
kafka                 Up 33 seconds
grafana               Up 30 seconds
policy-api            Up 37 seconds
compose_zookeeper_1   Up 34 seconds
mariadb               Up 39 seconds
simulator             Up 36 seconds
prometheus            Up 35 seconds
NAMES                 STATUS
policy-apex-pdp       Up 36 seconds
policy-pap            Up 37 seconds
kafka                 Up 38 seconds
grafana               Up 35 seconds
policy-api            Up 42 seconds
compose_zookeeper_1   Up 39 seconds
mariadb               Up 44 seconds
simulator             Up 41 seconds
prometheus            Up 40 seconds
++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ docker_stats
++ uname -s
+ tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 09:49:42 up 4 min, 0 users, load average: 3.18, 1.38, 0.54
Tasks: 208 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 14.0 us, 2.9 sy, 0.0 ni, 78.9 id, 4.1 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
        total   used    free    shared  buff/cache  available
Mem:    31G     2.8G    22G     1.3M    6.0G        28G
Swap:   1.0G    0B      1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 36 seconds
policy-pap            Up 37 seconds
kafka                 Up 38 seconds
grafana               Up 35 seconds
policy-api            Up 43 seconds
compose_zookeeper_1   Up 39 seconds
mariadb               Up 44 seconds
simulator             Up 41 seconds
prometheus            Up 40 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O         PIDS
921d4ecfbc80   policy-apex-pdp       1.00%   182.2MiB / 31.41GiB   0.57%   9.55kB / 8.89kB   0B / 0B           48
79858661e78b   policy-pap            1.60%   589.3MiB / 31.41GiB   1.83%   32.5kB / 61.8kB   0B / 153MB        61
0e55bf7c996a   kafka                 4.80%   393.8MiB / 31.41GiB   1.22%   75.2kB / 79kB     0B / 512kB        85
da92a016ee73   grafana               0.03%   57.9MiB / 31.41GiB    0.18%   18.7kB / 3.44kB   0B / 24.2MB       19
2cbd2f80f66a   policy-api            0.11%   487.2MiB / 31.41GiB   1.51%   1MB / 737kB       0B / 0B           57
47106728afb8   compose_zookeeper_1   0.12%   97.93MiB / 31.41GiB   0.30%   56kB / 48.9kB     0B / 426kB        60
624401bbb71c   mariadb               0.02%   101.8MiB / 31.41GiB   0.32%   996kB / 1.19MB    11.1MB / 65.4MB   36
8af0f3e29b08   simulator             0.07%   121.1MiB / 31.41GiB   0.38%   1.23kB / 0B       0B / 0B           76
a161cef3d62d   prometheus            0.00%   19.54MiB / 31.41GiB   0.06%   55.7kB / 1.87kB   0B / 0B           13
+ echo
+ cd /tmp/tmp.l4Kz55UmBK
+ echo 'Reading the testplan:'
Reading the testplan:
+ echo 'pap-test.robot pap-slas.robot'
+ egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
+ sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
+ cat testplan.txt
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
++ xargs
+ SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
+ echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+ echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
==============================================================================
pap
==============================================================================
pap.Pap-Test
==============================================================================
LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
------------------------------------------------------------------------------
LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
------------------------------------------------------------------------------
LoadNodeTemplates :: Create node templates in database using speci... | PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
------------------------------------------------------------------------------
pap.Pap-Test | PASS |
22 tests, 22 passed, 0 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
------------------------------------------------------------------------------
pap.Pap-Slas | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap | PASS |
30 tests, 30 passed, 0 failed
==============================================================================
Output: /tmp/tmp.l4Kz55UmBK/output.xml
Log: /tmp/tmp.l4Kz55UmBK/log.html
Report: /tmp/tmp.l4Kz55UmBK/report.html
+ RESULT=0
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 0'
RESULT: 0
+ exit 0
+ on_exit
+ rc=0
+ [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
kafka                 Up 2 minutes
grafana               Up 2 minutes
policy-api            Up 2 minutes
compose_zookeeper_1   Up 2 minutes
mariadb               Up 2 minutes
simulator             Up 2 minutes
prometheus            Up 2 minutes
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 09:51:32 up 6 min, 0 users, load average: 0.75, 1.08, 0.53
Tasks: 196 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
%Cpu(s): 11.1 us, 2.2 sy, 0.0 ni, 83.3 id, 3.3 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
        total   used    free    shared  buff/cache  available
Mem:    31G     2.8G    22G     1.3M    6.0G        28G
Swap:   1.0G    0B      1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
kafka                 Up 2 minutes
grafana               Up 2 minutes
policy-api            Up 2 minutes
compose_zookeeper_1   Up 2 minutes
mariadb               Up 2 minutes
simulator             Up 2 minutes
prometheus            Up 2 minutes
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O         PIDS
921d4ecfbc80   policy-apex-pdp       0.62%   191.6MiB / 31.41GiB   0.60%   57.4kB / 92.3kB   0B / 0B           52
79858661e78b   policy-pap            0.53%   532MiB / 31.41GiB     1.65%   2.33MB / 804kB    0B / 153MB        65
0e55bf7c996a   kafka                 1.88%   401.4MiB / 31.41GiB   1.25%   245kB / 220kB     0B / 610kB        85
da92a016ee73   grafana               0.02%   65.89MiB / 31.41GiB   0.20%   19.5kB / 4.48kB   0B / 24.2MB       19
2cbd2f80f66a   policy-api            0.11%   548.7MiB / 31.41GiB   1.71%   2.49MB / 1.26MB   0B / 0B           57
47106728afb8   compose_zookeeper_1   0.11%   97.97MiB / 31.41GiB   0.30%   59kB / 50.6kB     0B / 426kB        60
624401bbb71c   mariadb               0.03%   103.2MiB / 31.41GiB   0.32%   1.95MB / 4.77MB   11.1MB / 65.8MB   28
8af0f3e29b08   simulator             0.07%   121.2MiB / 31.41GiB   0.38%   1.5kB / 0B        0B / 0B           78
a161cef3d62d   prometheus            0.00%   24.92MiB / 31.41GiB   0.08%   167kB / 11kB      0B / 0B           13
+ echo
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
++ echo 'Shut down started!'
Shut down started!
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
++ source export-ports.sh
++ source get-versions.sh
++ echo 'Collecting logs from docker compose containers...'
Collecting logs from docker compose containers...
++ docker-compose logs
++ cat docker_compose.log
Attaching to policy-apex-pdp, policy-pap, kafka, grafana, policy-api, policy-db-migrator, compose_zookeeper_1, mariadb, simulator, prometheus
zookeeper_1 | ===> User
zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
zookeeper_1 | ===> Configuring ...
zookeeper_1 | ===> Running preflight checks ...
zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ...
zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ...
zookeeper_1 | ===> Launching ...
zookeeper_1 | ===> Launching zookeeper ...
zookeeper_1 | [2024-02-19 09:49:06,191] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-02-19 09:49:06,199] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-02-19 09:49:06,199] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-02-19 09:49:06,199] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-02-19 09:49:06,199] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-02-19 09:49:06,201] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-02-19 09:49:06,202] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-02-19 09:49:06,202] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-02-19 09:49:06,202] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper_1 | [2024-02-19 09:49:06,203] INFO Log4j 1.2 jmx support not found; jmx disabled.
(org.apache.zookeeper.jmx.ManagedUtil) zookeeper_1 | [2024-02-19 09:49:06,203] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-19 09:49:06,204] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-19 09:49:06,204] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-19 09:49:06,204] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-19 09:49:06,204] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-19 09:49:06,204] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper_1 | [2024-02-19 09:49:06,216] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics) zookeeper_1 | [2024-02-19 09:49:06,219] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-02-19 09:49:06,219] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-02-19 09:49:06,222] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-02-19 09:49:06,233] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-19 09:49:06,233] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-19 09:49:06,233] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-19 09:49:06,233] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-19 09:49:06,233] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-19 09:49:06,233] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-19 09:49:06,233] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-19 09:49:06,234] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-19 09:49:06,234] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-19 09:49:06,234] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-19 09:49:06,236] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-19 09:49:06,236] INFO Server environment:host.name=47106728afb8 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-19 09:49:06,236] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-19 09:49:06,236] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-19 09:49:06,236] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-19 09:49:06,236] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7
.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | 
[2024-02-19 09:49:06,237] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,237] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,237] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,237] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,237] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,237] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,237] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,238] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,238] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,238] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,238] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,238] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,239] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,239] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,239] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,239] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,239] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,239] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,239] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,240] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
zookeeper_1 | [2024-02-19 09:49:06,241] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,242] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,243] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper_1 | [2024-02-19 09:49:06,243] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper_1 | [2024-02-19 09:49:06,244] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-02-19 09:49:06,244] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-02-19 09:49:06,244] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-02-19 09:49:06,244] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-02-19 09:49:06,244] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-02-19 09:49:06,244] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-02-19 09:49:06,247] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,247] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,248] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper_1 | [2024-02-19 09:49:06,248] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper_1 | [2024-02-19 09:49:06,248] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,269] INFO Logging initialized @537ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
zookeeper_1 | [2024-02-19 09:49:06,364] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1 | [2024-02-19 09:49:06,364] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1 | [2024-02-19 09:49:06,385] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server)
zookeeper_1 | [2024-02-19 09:49:06,416] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
zookeeper_1 | [2024-02-19 09:49:06,416] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
zookeeper_1 | [2024-02-19 09:49:06,420] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
zookeeper_1 | [2024-02-19 09:49:06,425] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
zookeeper_1 | [2024-02-19 09:49:06,436] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1 | [2024-02-19 09:49:06,453] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
zookeeper_1 | [2024-02-19 09:49:06,453] INFO Started @721ms (org.eclipse.jetty.server.Server)
zookeeper_1 | [2024-02-19 09:49:06,453] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
zookeeper_1 | [2024-02-19 09:49:06,460] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper_1 | [2024-02-19 09:49:06,461] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper_1 | [2024-02-19 09:49:06,463] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper_1 | [2024-02-19 09:49:06,465] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper_1 | [2024-02-19 09:49:06,482] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper_1 | [2024-02-19 09:49:06,482] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper_1 | [2024-02-19 09:49:06,483] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
zookeeper_1 | [2024-02-19 09:49:06,483] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
zookeeper_1 | [2024-02-19 09:49:06,489] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
zookeeper_1 | [2024-02-19 09:49:06,489] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-02-19 09:49:06,493] INFO Snapshot loaded in 9 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
zookeeper_1 | [2024-02-19 09:49:06,494] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-02-19 09:49:06,494] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-02-19 09:49:06,505] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
zookeeper_1 | [2024-02-19 09:49:06,505] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper_1 | [2024-02-19 09:49:06,524] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
zookeeper_1 | [2024-02-19 09:49:06,525] INFO ZooKeeper audit is disabled.
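The startup block above advertises minSessionTimeout 4000 ms and maxSessionTimeout 40000 ms and binds the client port on 0.0.0.0:2181. A minimal client-side sketch of how a requested session timeout is negotiated into that window; this is not part of the build, it assumes the stock org.apache.zookeeper client on the classpath and a reachable server (shown here as localhost):

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooKeeper;

// Hypothetical probe against the server bound above on port 2181.
public class ZkSessionProbe {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Ask for a 60 s session; the server clamps the negotiated value into
        // the [minSessionTimeout=4000 ms, maxSessionTimeout=40000 ms] window.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 60_000, event -> {
            if (event.getState() == KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        // Given the limits logged above, this should print 40000, not 60000.
        System.out.println("negotiated session timeout = " + zk.getSessionTimeout() + " ms");
        zk.close();
    }
}
```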
(org.apache.zookeeper.audit.ZKAuditProvider) zookeeper_1 | [2024-02-19 09:49:07,939] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) grafana | logger=settings t=2024-02-19T09:49:06.799886616Z level=info msg="Starting Grafana" version=10.3.3 commit=252761264e22ece57204b327f9130d3b44592c01 branch=HEAD compiled=2024-02-19T09:49:06Z grafana | logger=settings t=2024-02-19T09:49:06.800215922Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2024-02-19T09:49:06.800234882Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2024-02-19T09:49:06.800239342Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2024-02-19T09:49:06.800242622Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2024-02-19T09:49:06.800246292Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2024-02-19T09:49:06.800249672Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2024-02-19T09:49:06.800274222Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2024-02-19T09:49:06.800289853Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2024-02-19T09:49:06.800296793Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-02-19T09:49:06.800302223Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-02-19T09:49:06.800307033Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2024-02-19T09:49:06.800314293Z level=info msg=Target target=[all] grafana | logger=settings t=2024-02-19T09:49:06.800327583Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-02-19T09:49:06.800366524Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-02-19T09:49:06.800381954Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-02-19T09:49:06.800385124Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-02-19T09:49:06.800387964Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-02-19T09:49:06.800390934Z level=info msg="App mode production" grafana | logger=sqlstore t=2024-02-19T09:49:06.800932982Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-02-19T09:49:06.800965832Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-02-19T09:49:06.815060688Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-02-19T09:49:06.816440618Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-02-19T09:49:06.817584236Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.142898ms grafana | logger=migrator t=2024-02-19T09:49:06.822278373Z level=info msg="Executing 
migration" id="create user table" grafana | logger=migrator t=2024-02-19T09:49:06.823059845Z level=info msg="Migration successfully executed" id="create user table" duration=780.722µs grafana | logger=migrator t=2024-02-19T09:49:06.828636357Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-02-19T09:49:06.829672942Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.041965ms grafana | logger=migrator t=2024-02-19T09:49:06.832791297Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-02-19T09:49:06.833378366Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=581.978µs grafana | logger=migrator t=2024-02-19T09:49:06.836851627Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-02-19T09:49:06.837434646Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=583.059µs grafana | logger=migrator t=2024-02-19T09:49:06.841568256Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-02-19T09:49:06.842178475Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=609.749µs grafana | logger=migrator t=2024-02-19T09:49:06.846107482Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-02-19T09:49:06.848906763Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.801211ms grafana | logger=migrator t=2024-02-19T09:49:06.855004792Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-02-19T09:49:06.855812454Z level=info msg="Migration successfully executed" id="create user table v2" duration=804.572µs grafana | logger=migrator t=2024-02-19T09:49:06.862531023Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2024-02-19T09:49:06.863403495Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=872.182µs grafana | logger=migrator t=2024-02-19T09:49:06.867585787Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2024-02-19T09:49:06.868422879Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=836.742µs grafana | logger=migrator t=2024-02-19T09:49:06.872111442Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-02-19T09:49:06.872828942Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=715.37µs grafana | logger=migrator t=2024-02-19T09:49:06.876627719Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-02-19T09:49:06.877660874Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.032435ms grafana | logger=migrator t=2024-02-19T09:49:06.883731102Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-02-19T09:49:06.886253769Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=2.522057ms grafana | logger=migrator t=2024-02-19T09:49:06.890077556Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator 
t=2024-02-19T09:49:06.890117826Z level=info msg="Migration successfully executed" id="Update user table charset" duration=41.22µs grafana | logger=migrator t=2024-02-19T09:49:06.896137394Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-02-19T09:49:06.897382962Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.245298ms grafana | logger=migrator t=2024-02-19T09:49:06.904622698Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-02-19T09:49:06.905198206Z level=info msg="Migration successfully executed" id="Add missing user data" duration=571.878µs grafana | logger=migrator t=2024-02-19T09:49:06.910181169Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2024-02-19T09:49:06.912724917Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=2.389525ms grafana | logger=migrator t=2024-02-19T09:49:06.916177977Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2024-02-19T09:49:06.917037339Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=859.282µs grafana | logger=migrator t=2024-02-19T09:49:06.921266141Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2024-02-19T09:49:06.922727352Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.460651ms grafana | logger=migrator t=2024-02-19T09:49:06.926041911Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2024-02-19T09:49:06.936408183Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=10.365652ms grafana | logger=migrator t=2024-02-19T09:49:06.942022354Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2024-02-19T09:49:06.942941189Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=917.974µs grafana | logger=migrator t=2024-02-19T09:49:06.949823319Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2024-02-19T09:49:06.950634601Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=811.182µs grafana | logger=migrator t=2024-02-19T09:49:06.954515518Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2024-02-19T09:49:06.95538609Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=870.432µs grafana | logger=migrator t=2024-02-19T09:49:06.960190931Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2024-02-19T09:49:06.961066133Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=874.372µs grafana | logger=migrator t=2024-02-19T09:49:06.965519209Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2024-02-19T09:49:06.966483203Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=962.564µs grafana | logger=migrator t=2024-02-19T09:49:06.972081864Z level=info 
msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2024-02-19T09:49:06.972170266Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=90.542µs grafana | logger=migrator t=2024-02-19T09:49:06.976798044Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2024-02-19T09:49:06.977644036Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=846.622µs grafana | logger=migrator t=2024-02-19T09:49:06.981912038Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2024-02-19T09:49:06.98270853Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=797.092µs grafana | logger=migrator t=2024-02-19T09:49:06.991445208Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2024-02-19T09:49:06.992649685Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.204737ms grafana | logger=migrator t=2024-02-19T09:49:06.997635418Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2024-02-19T09:49:06.998941377Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.304449ms grafana | logger=migrator t=2024-02-19T09:49:07.002391016Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2024-02-19T09:49:07.006115966Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.72487ms grafana | logger=migrator t=2024-02-19T09:49:07.009459739Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2024-02-19T09:49:07.01030734Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=849.971µs grafana | logger=migrator t=2024-02-19T09:49:07.014729871Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2024-02-19T09:49:07.015684872Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=953.671µs grafana | logger=migrator t=2024-02-19T09:49:07.019082453Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2024-02-19T09:49:07.02056017Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.476347ms grafana | logger=migrator t=2024-02-19T09:49:07.02565872Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2024-02-19T09:49:07.026571361Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=912.611µs grafana | logger=migrator t=2024-02-19T09:49:07.034265452Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2024-02-19T09:49:07.035671989Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.406087ms grafana | logger=migrator t=2024-02-19T09:49:07.039536504Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2024-02-19T09:49:07.040170322Z level=info msg="Migration successfully executed" id="copy temp_user v1 to 
v2" duration=489.786µs grafana | logger=migrator t=2024-02-19T09:49:07.043203257Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2024-02-19T09:49:07.043854576Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=650.779µs grafana | logger=migrator t=2024-02-19T09:49:07.048095045Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2024-02-19T09:49:07.048613361Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=518.086µs grafana | logger=migrator t=2024-02-19T09:49:07.053236106Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2024-02-19T09:49:07.053941904Z level=info msg="Migration successfully executed" id="create star table" duration=705.208µs grafana | logger=migrator t=2024-02-19T09:49:07.057140012Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2024-02-19T09:49:07.058015263Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=872.501µs grafana | logger=migrator t=2024-02-19T09:49:07.064192365Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2024-02-19T09:49:07.06547452Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.281545ms grafana | logger=migrator t=2024-02-19T09:49:07.075139665Z level=info msg="Executing migration" id="create index UQE_org_name - v1" policy-apex-pdp | Waiting for mariadb port 3306... policy-apex-pdp | mariadb (172.17.0.2:3306) open policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | kafka (172.17.0.9:9092) open policy-apex-pdp | Waiting for pap port 6969... 
policy-apex-pdp | pap (172.17.0.10:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2024-02-19T09:49:38.214+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2024-02-19T09:49:38.453+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 806dde56-8d7b-4023-b37b-d9545bfe5732 policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 
0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-02-19T09:49:07.076676082Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.535287ms grafana | logger=migrator t=2024-02-19T09:49:07.080244285Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2024-02-19T09:49:07.081003134Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=760.059µs grafana | logger=migrator t=2024-02-19T09:49:07.084341923Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2024-02-19T09:49:07.085168653Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=823.659µs grafana | logger=migrator t=2024-02-19T09:49:07.088189428Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2024-02-19T09:49:07.089064259Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=874.561µs grafana | logger=migrator t=2024-02-19T09:49:07.093896265Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2024-02-19T09:49:07.094746246Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=849.751µs grafana | logger=migrator t=2024-02-19T09:49:07.098189527Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2024-02-19T09:49:07.098217037Z level=info msg="Migration successfully executed" id="Update org table charset" duration=33.561µs grafana | logger=migrator t=2024-02-19T09:49:07.101648637Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2024-02-19T09:49:07.101674467Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=26.98µs grafana | logger=migrator t=2024-02-19T09:49:07.104999356Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2024-02-19T09:49:07.105259289Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=259.633µs grafana | logger=migrator t=2024-02-19T09:49:07.112447225Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2024-02-19T09:49:07.113254964Z level=info msg="Migration successfully executed" id="create dashboard table" duration=807.439µs grafana | logger=migrator t=2024-02-19T09:49:07.121889376Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2024-02-19T09:49:07.123314653Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.424967ms grafana | logger=migrator t=2024-02-19T09:49:07.126897815Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2024-02-19T09:49:07.128828638Z level=info msg="Migration successfully executed" 
id="add unique index dashboard_account_id_slug" duration=1.929793ms grafana | logger=migrator t=2024-02-19T09:49:07.132312229Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2024-02-19T09:49:07.133061568Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=748.829µs grafana | logger=migrator t=2024-02-19T09:49:07.138470382Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2024-02-19T09:49:07.139326632Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=855.991µs grafana | logger=migrator t=2024-02-19T09:49:07.143724163Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2024-02-19T09:49:07.144896888Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.172955ms grafana | logger=migrator t=2024-02-19T09:49:07.14848963Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2024-02-19T09:49:07.15540115Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.91108ms grafana | logger=migrator t=2024-02-19T09:49:07.159712031Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2024-02-19T09:49:07.160217897Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=505.526µs grafana | logger=migrator t=2024-02-19T09:49:07.167516083Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2024-02-19T09:49:07.168920249Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.403936ms grafana | logger=migrator t=2024-02-19T09:49:07.17243655Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2024-02-19T09:49:07.173658185Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.221205ms grafana | logger=migrator t=2024-02-19T09:49:07.178008596Z level=info msg="Executing migration" id="copy dashboard v1 to v2" policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = 
[TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-02-19T09:49:38.631+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-02-19T09:49:38.631+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-02-19T09:49:38.631+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708336178629 policy-apex-pdp | [2024-02-19T09:49:38.633+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-1, groupId=806dde56-8d7b-4023-b37b-d9545bfe5732] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-02-19T09:49:38.647+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-02-19T09:49:38.647+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2024-02-19T09:49:38.651+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=806dde56-8d7b-4023-b37b-d9545bfe5732, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2024-02-19T09:49:38.691+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2 policy-apex-pdp | client.rack = grafana | logger=migrator t=2024-02-19T09:49:07.178431731Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=401.405µs grafana | logger=migrator t=2024-02-19T09:49:07.181527908Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2024-02-19T09:49:07.182439288Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=911.2µs grafana | logger=migrator t=2024-02-19T09:49:07.18605545Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator 
t=2024-02-19T09:49:07.186258683Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=201.202µs grafana | logger=migrator t=2024-02-19T09:49:07.222221305Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2024-02-19T09:49:07.22519277Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.970395ms grafana | logger=migrator t=2024-02-19T09:49:07.231925939Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2024-02-19T09:49:07.234941534Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.014045ms grafana | logger=migrator t=2024-02-19T09:49:07.238471606Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2024-02-19T09:49:07.241204278Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.733492ms grafana | logger=migrator t=2024-02-19T09:49:07.245567189Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2024-02-19T09:49:07.246452699Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=885.53µs grafana | logger=migrator t=2024-02-19T09:49:07.249823309Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2024-02-19T09:49:07.253683624Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.831594ms grafana | logger=migrator t=2024-02-19T09:49:07.25848487Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 806dde56-8d7b-4023-b37b-d9545bfe5732 policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS grafana | logger=migrator t=2024-02-19T09:49:07.260002719Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.504678ms grafana | logger=migrator t=2024-02-19T09:49:07.267277183Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2024-02-19T09:49:07.268147634Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=870.441µs grafana | logger=migrator t=2024-02-19T09:49:07.274675931Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2024-02-19T09:49:07.274728061Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=53.45µs grafana | logger=migrator t=2024-02-19T09:49:07.278351373Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2024-02-19T09:49:07.278393654Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=43.921µs grafana | logger=migrator t=2024-02-19T09:49:07.283098149Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | 
logger=migrator t=2024-02-19T09:49:07.286162665Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.065226ms grafana | logger=migrator t=2024-02-19T09:49:07.292032924Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2024-02-19T09:49:07.294003527Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.969893ms grafana | logger=migrator t=2024-02-19T09:49:07.297322116Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2024-02-19T09:49:07.299264119Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.941263ms grafana | logger=migrator t=2024-02-19T09:49:07.302678209Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2024-02-19T09:49:07.304686533Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.011154ms grafana | logger=migrator t=2024-02-19T09:49:07.311562803Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2024-02-19T09:49:07.311873917Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=310.794µs grafana | logger=migrator t=2024-02-19T09:49:07.317149569Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2024-02-19T09:49:07.318245981Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.095682ms grafana | logger=migrator t=2024-02-19T09:49:07.322076407Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2024-02-19T09:49:07.323335301Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.258984ms grafana | logger=migrator t=2024-02-19T09:49:07.327633282Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2024-02-19T09:49:07.327658882Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=26.6µs grafana | logger=migrator t=2024-02-19T09:49:07.331682789Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2024-02-19T09:49:07.33253861Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=853.781µs grafana | logger=migrator t=2024-02-19T09:49:07.336727788Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2024-02-19T09:49:07.337885152Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.156474ms grafana | logger=migrator t=2024-02-19T09:49:07.342219733Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2024-02-19T09:49:07.349989354Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=7.770391ms grafana | logger=migrator t=2024-02-19T09:49:07.355246116Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2024-02-19T09:49:07.355799363Z level=info msg="Migration successfully executed" id="create dashboard_provisioning 
v2" duration=552.967µs grafana | logger=migrator t=2024-02-19T09:49:07.358955529Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2024-02-19T09:49:07.359594357Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=638.467µs grafana | logger=migrator t=2024-02-19T09:49:07.363686865Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2024-02-19T09:49:07.365069021Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.373036ms grafana | logger=migrator t=2024-02-19T09:49:07.368721013Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2024-02-19T09:49:07.369316231Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=594.508µs grafana | logger=migrator t=2024-02-19T09:49:07.373389739Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2024-02-19T09:49:07.374060896Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=666.387µs grafana | logger=migrator t=2024-02-19T09:49:07.378202905Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2024-02-19T09:49:07.380621514Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.417999ms grafana | logger=migrator t=2024-02-19T09:49:07.38625524Z level=info msg="Executing migration" id="Add index for dashboard_title" policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-02-19T09:49:38.702+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-02-19T09:49:38.702+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-02-19T09:49:38.702+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708336178701 policy-apex-pdp | [2024-02-19T09:49:38.702+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2, groupId=806dde56-8d7b-4023-b37b-d9545bfe5732] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-02-19T09:49:38.703+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=40d5ca6f-86bb-40c3-b0d2-562aae21e14c, alive=false, publisher=null]]: starting policy-apex-pdp | [2024-02-19T09:49:38.733+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | buffer.memory = 33554432 policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.type = none policy-apex-pdp | connections.max.idle.ms = 540000 
policy-apex-pdp | delivery.timeout.ms = 120000 policy-apex-pdp | enable.idempotence = true policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-apex-pdp | max.request.size = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-apex-pdp | partitioner.class = null policy-apex-pdp | partitioner.ignore.keys = false policy-apex-pdp | receive.buffer.bytes = 32768 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retries = 2147483647 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-02-19T09:49:07.387199681Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=929.831µs grafana | logger=migrator t=2024-02-19T09:49:07.391493062Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2024-02-19T09:49:07.391758905Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=265.343µs grafana | logger=migrator t=2024-02-19T09:49:07.394325895Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2024-02-19T09:49:07.394585058Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=258.903µs grafana | logger=migrator t=2024-02-19T09:49:07.398334891Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2024-02-19T09:49:07.399205251Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=870.12µs grafana | logger=migrator t=2024-02-19T09:49:07.403693874Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2024-02-19T09:49:07.407378107Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.683413ms grafana | logger=migrator t=2024-02-19T09:49:07.41096743Z level=info msg="Executing migration" id="create data_source table" mariadb | 2024-02-19 09:48:57+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 
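The Grafana migrator entries interleaved above all follow one fixed pattern: log "Executing migration", run the DDL step, then log "Migration successfully executed" with the measured wall-clock duration. A hypothetical sketch of that execute-and-time shape; MigrationTimer, Step, and runDdl are illustrative names, not Grafana's API, and the duration is printed only in microseconds rather than the adaptive µs/ms units seen in the log:

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the migrator's log pattern: announce the step, time it, report it.
public class MigrationTimer {
    interface Step { void run(); }

    static void execute(String id, Step runDdl) {
        System.out.printf("logger=migrator level=info msg=\"Executing migration\" id=\"%s\"%n", id);
        Instant start = Instant.now();
        runDdl.run();
        Duration d = Duration.between(start, Instant.now());
        System.out.printf(
            "logger=migrator level=info msg=\"Migration successfully executed\" id=\"%s\" duration=%dµs%n",
            id, d.toNanos() / 1_000);
    }

    public static void main(String[] args) {
        execute("create example table", () -> { /* the DDL statement would run here */ });
    }
}
```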
mariadb | 2024-02-19 09:48:57+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-02-19 09:48:57+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-02-19 09:48:57+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-02-19 9:48:57 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-02-19 9:48:57 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-02-19 9:48:57 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-02-19 09:48:59+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-02-19 09:48:59+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-02-19 09:48:59+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-02-19 9:48:59 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 99 ... mariadb | 2024-02-19 9:48:59 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-02-19 9:48:59 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-02-19 9:48:59 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-02-19 9:48:59 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-02-19 9:48:59 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-02-19 9:48:59 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-02-19 9:48:59 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-02-19 9:48:59 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-02-19 9:48:59 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-02-19 9:48:59 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-02-19 9:48:59 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-02-19 9:48:59 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-02-19 9:48:59 0 [Note] InnoDB: log sequence number 46590; transaction id 14 mariadb | 2024-02-19 9:48:59 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-02-19 9:48:59 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-02-19 9:48:59 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 
mariadb | 2024-02-19 9:48:59 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
mariadb | 2024-02-19 9:48:59 0 [Note] mariadbd: ready for connections.
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution
mariadb | 2024-02-19 09:49:00+00:00 [Note] [Entrypoint]: Temporary server started.
mariadb | 2024-02-19 09:49:01+00:00 [Note] [Entrypoint]: Creating user policy_user
mariadb | 2024-02-19 09:49:01+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
mariadb |
mariadb | 2024-02-19 09:49:01+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
mariadb |
mariadb | 2024-02-19 09:49:01+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
mariadb | #!/bin/bash -xv
mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
mariadb | #
mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
mariadb | # you may not use this file except in compliance with the License.
mariadb | # You may obtain a copy of the License at
mariadb | #
mariadb | # http://www.apache.org/licenses/LICENSE-2.0
mariadb | #
mariadb | # Unless required by applicable law or agreed to in writing, software
mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
mariadb | # See the License for the specific language governing permissions and
mariadb | # limitations under the License.
mariadb |
mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | do
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
mariadb | done
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb |
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
mariadb |
mariadb | 2024-02-19 09:49:02+00:00 [Note] [Entrypoint]: Stopping temporary server
mariadb | 2024-02-19 9:49:02 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
mariadb | 2024-02-19 9:49:02 0 [Note] InnoDB: FTS optimize thread exiting.
mariadb | 2024-02-19 9:49:02 0 [Note] InnoDB: Starting shutdown...
mariadb | 2024-02-19 9:49:02 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
mariadb | 2024-02-19 9:49:02 0 [Note] InnoDB: Buffer pool(s) dump completed at 240219 9:49:02
mariadb | 2024-02-19 9:49:03 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
mariadb | 2024-02-19 9:49:03 0 [Note] InnoDB: Shutdown completed; log sequence number 331999; transaction id 298
mariadb | 2024-02-19 9:49:03 0 [Note] mariadbd: Shutdown complete
mariadb |
mariadb | 2024-02-19 09:49:03+00:00 [Note] [Entrypoint]: Temporary server stopped
mariadb |
mariadb | 2024-02-19 09:49:03+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
mariadb |
mariadb | 2024-02-19 9:49:03 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
mariadb | 2024-02-19 9:49:03 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mariadb | 2024-02-19 9:49:03 0 [Note] InnoDB: Number of transaction pools: 1
mariadb | 2024-02-19 9:49:03 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mariadb | 2024-02-19 9:49:03 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
mariadb | 2024-02-19 9:49:03 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-02-19 9:49:03 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-02-19 9:49:03 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
mariadb | 2024-02-19 9:49:03 0 [Note] InnoDB: Completed initialization of buffer pool
mariadb | 2024-02-19 9:49:03 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
mariadb | 2024-02-19 9:49:03 0 [Note] InnoDB: 128 rollback segments are active.
mariadb | 2024-02-19 9:49:03 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
mariadb | 2024-02-19 9:49:03 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
mariadb | 2024-02-19 9:49:03 0 [Note] InnoDB: log sequence number 331999; transaction id 299
mariadb | 2024-02-19 9:49:03 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
mariadb | 2024-02-19 9:49:03 0 [Note] Plugin 'FEEDBACK' is disabled.
mariadb | 2024-02-19 9:49:03 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
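db.sh above provisions six schemas and grants each of them to ${MYSQL_USER} before the temporary server is shut down. A quick post-init check, as a sketch only: it assumes a shell on the same bridge network, the mariadb hostname that the other services resolve later in this log, and the credentials already echoed by the xtrace output (root/secret, policy_user/policy_user).

    # Confirm each schema exists and is reachable as the application user.
    for db in migration pooling policyadmin operationshistory clampacm policyclamp
    do
      mysql -h mariadb -upolicy_user -ppolicy_user --execute "USE ${db}; SELECT DATABASE();"
    done

    # Confirm the grants that the loop issued as root.
    mysql -h mariadb -uroot -psecret --execute "SHOW GRANTS FOR 'policy_user'@'%';"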
mariadb | 2024-02-19 9:49:03 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. mariadb | 2024-02-19 9:49:03 0 [Note] Server socket created on IP: '0.0.0.0'. mariadb | 2024-02-19 9:49:03 0 [Note] Server socket created on IP: '::'. mariadb | 2024-02-19 9:49:03 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution mariadb | 2024-02-19 9:49:03 0 [Note] InnoDB: Buffer pool(s) load completed at 240219 9:49:03 mariadb | 2024-02-19 9:49:04 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) mariadb | 2024-02-19 9:49:04 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) mariadb | 2024-02-19 9:49:04 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) mariadb | 2024-02-19 9:49:05 52 [Warning] Aborted connection 52 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | 
transaction.timeout.ms = 60000 policy-apex-pdp | transactional.id = null policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | policy-apex-pdp | [2024-02-19T09:49:38.744+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-apex-pdp | [2024-02-19T09:49:38.763+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-02-19T09:49:38.763+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-02-19T09:49:38.763+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708336178763 policy-apex-pdp | [2024-02-19T09:49:38.764+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=40d5ca6f-86bb-40c3-b0d2-562aae21e14c, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-apex-pdp | [2024-02-19T09:49:38.764+00:00|INFO|ServiceManager|main] service manager starting set alive policy-apex-pdp | [2024-02-19T09:49:38.764+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-apex-pdp | [2024-02-19T09:49:38.768+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-apex-pdp | [2024-02-19T09:49:38.768+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-apex-pdp | [2024-02-19T09:49:38.772+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-apex-pdp | [2024-02-19T09:49:38.773+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-apex-pdp | [2024-02-19T09:49:38.773+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-apex-pdp | [2024-02-19T09:49:38.773+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=806dde56-8d7b-4023-b37b-d9545bfe5732, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866 policy-apex-pdp | [2024-02-19T09:49:38.773+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=806dde56-8d7b-4023-b37b-d9545bfe5732, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-apex-pdp | [2024-02-19T09:49:38.774+00:00|INFO|ServiceManager|main] service manager starting Create REST server policy-apex-pdp | [2024-02-19T09:49:38.798+00:00|INFO|OrderedServiceImpl|Timer-0] ***** 
OrderedServiceImpl implementers: policy-apex-pdp | [] policy-apex-pdp | [2024-02-19T09:49:38.800+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"115484fc-62b7-4434-a532-4935ac8d503c","timestampMs":1708336178773,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-02-19T09:49:38.977+00:00|INFO|ServiceManager|main] service manager starting Rest Server policy-apex-pdp | [2024-02-19T09:49:38.978+00:00|INFO|ServiceManager|main] service manager starting grafana | logger=migrator t=2024-02-19T09:49:07.412497877Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.534437ms grafana | logger=migrator t=2024-02-19T09:49:07.416468295Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2024-02-19T09:49:07.417310714Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=841.929µs grafana | logger=migrator t=2024-02-19T09:49:07.421721845Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2024-02-19T09:49:07.422636647Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=914.422µs grafana | logger=migrator t=2024-02-19T09:49:07.42719754Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2024-02-19T09:49:07.42800942Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=811.71µs grafana | logger=migrator t=2024-02-19T09:49:07.432062247Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2024-02-19T09:49:07.433058648Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=995.951µs grafana | logger=migrator t=2024-02-19T09:49:07.436241017Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2024-02-19T09:49:07.447243116Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=11.002159ms grafana | logger=migrator t=2024-02-19T09:49:07.450705776Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2024-02-19T09:49:07.451353184Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=641.548µs grafana | logger=migrator t=2024-02-19T09:49:07.455180058Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2024-02-19T09:49:07.4561111Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=930.562µs grafana | logger=migrator t=2024-02-19T09:49:07.462503664Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2024-02-19T09:49:07.46385066Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.346676ms grafana | logger=migrator t=2024-02-19T09:49:07.469849321Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2024-02-19T09:49:07.470737011Z level=info msg="Migration 
successfully executed" id="Drop old table data_source_v1 #2" duration=887.34µs grafana | logger=migrator t=2024-02-19T09:49:07.474248662Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2024-02-19T09:49:07.478175819Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.926667ms grafana | logger=migrator t=2024-02-19T09:49:07.482949234Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2024-02-19T09:49:07.485333842Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.384378ms grafana | logger=migrator t=2024-02-19T09:49:07.490056358Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2024-02-19T09:49:07.490091929Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=35.86µs grafana | logger=migrator t=2024-02-19T09:49:07.493426767Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2024-02-19T09:49:07.493730591Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=305.284µs grafana | logger=migrator t=2024-02-19T09:49:07.497182121Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2024-02-19T09:49:07.501161938Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.979257ms grafana | logger=migrator t=2024-02-19T09:49:07.507598153Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2024-02-19T09:49:07.507903917Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=305.334µs grafana | logger=migrator t=2024-02-19T09:49:07.512527232Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2024-02-19T09:49:07.512879065Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=353.394µs grafana | logger=migrator t=2024-02-19T09:49:07.517130825Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2024-02-19T09:49:07.521213293Z level=info msg="Migration successfully executed" id="Add uid column" duration=4.28664ms grafana | logger=migrator t=2024-02-19T09:49:07.525469693Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2024-02-19T09:49:07.525705727Z level=info msg="Migration successfully executed" id="Update uid value" duration=235.734µs grafana | logger=migrator t=2024-02-19T09:49:07.529959786Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2024-02-19T09:49:07.530990398Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.029762ms grafana | logger=migrator t=2024-02-19T09:49:07.535036066Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2024-02-19T09:49:07.536966059Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.930024ms grafana | logger=migrator t=2024-02-19T09:49:07.541965526Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2024-02-19T09:49:07.542844307Z level=info msg="Migration successfully executed" id="create api_key table" 
duration=878.401µs
grafana | logger=migrator t=2024-02-19T09:49:07.54738425Z level=info msg="Executing migration" id="add index api_key.account_id"
grafana | logger=migrator t=2024-02-19T09:49:07.549000059Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.614979ms
grafana | logger=migrator t=2024-02-19T09:49:07.553434181Z level=info msg="Executing migration" id="add index api_key.key"
grafana | logger=migrator t=2024-02-19T09:49:07.554897019Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.466538ms
grafana | logger=migrator t=2024-02-19T09:49:07.558505591Z level=info msg="Executing migration" id="add index api_key.account_id_name"
grafana | logger=migrator t=2024-02-19T09:49:07.559789426Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.283195ms
grafana | logger=migrator t=2024-02-19T09:49:07.564732884Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
grafana | logger=migrator t=2024-02-19T09:49:07.565608444Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=875.31µs
grafana | logger=migrator t=2024-02-19T09:49:07.607238943Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
grafana | logger=migrator t=2024-02-19T09:49:07.60867907Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.439626ms
policy-api | Waiting for mariadb port 3306...
policy-api | mariadb (172.17.0.2:3306) open
policy-api | Waiting for policy-db-migrator port 6824...
policy-api | policy-db-migrator (172.17.0.6:6824) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-api |
policy-api |   .   ____          _            __ _ _
policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-api |  =========|_|==============|___/=/_/_/_/
policy-api |  :: Spring Boot ::                (v3.1.8)
policy-api |
policy-api | [2024-02-19T09:49:12.687+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin)
policy-api | [2024-02-19T09:49:12.690+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-api | [2024-02-19T09:49:14.608+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2024-02-19T09:49:14.701+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 82 ms. Found 6 JPA repository interfaces.
policy-api | [2024-02-19T09:49:15.177+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
policy-api | [2024-02-19T09:49:15.179+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
policy-api | [2024-02-19T09:49:15.964+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-api | [2024-02-19T09:49:15.975+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-api | [2024-02-19T09:49:15.977+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2024-02-19T09:49:15.977+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
policy-api | [2024-02-19T09:49:16.071+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
policy-api | [2024-02-19T09:49:16.071+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3310 ms
policy-api | [2024-02-19T09:49:16.557+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-api | [2024-02-19T09:49:16.667+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
policy-api | [2024-02-19T09:49:16.674+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
policy-api | [2024-02-19T09:49:16.739+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-api | [2024-02-19T09:49:17.176+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-api | [2024-02-19T09:49:17.201+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2024-02-19T09:49:17.307+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@7636823f
policy-api | [2024-02-19T09:49:17.310+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-api | [2024-02-19T09:49:17.369+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default)
policy-api | [2024-02-19T09:49:17.371+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead
policy-api | [2024-02-19T09:49:19.470+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2024-02-19T09:49:19.474+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2024-02-19T09:49:20.575+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-api | [2024-02-19T09:49:21.451+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2024-02-19T09:49:22.706+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering.
Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2024-02-19T09:49:22.987+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@c7a7d3, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@79462469, org.springframework.security.web.context.SecurityContextHolderFilter@673ade3d, org.springframework.security.web.header.HeaderWriterFilter@6e2ab1f4, org.springframework.security.web.authentication.logout.LogoutFilter@39d666e0, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@547a79cd, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@4529b266, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@6aca85da, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3341ba8e, org.springframework.security.web.access.ExceptionTranslationFilter@495fa126, org.springframework.security.web.access.intercept.AuthorizationFilter@206d4413] policy-api | [2024-02-19T09:49:23.952+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2024-02-19T09:49:24.096+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2024-02-19T09:49:24.122+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2024-02-19T09:49:24.140+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 12.313 seconds (process running for 13.028) policy-api | [2024-02-19T09:49:39.948+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2024-02-19T09:49:39.948+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2024-02-19T09:49:39.950+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms policy-api | [2024-02-19T09:49:45.335+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers: policy-api | [] grafana | logger=migrator t=2024-02-19T09:49:07.615982606Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2024-02-19T09:49:07.617394622Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.411846ms grafana | logger=migrator t=2024-02-19T09:49:07.62236767Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2024-02-19T09:49:07.632960485Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=10.590525ms grafana | logger=migrator t=2024-02-19T09:49:07.636498876Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2024-02-19T09:49:07.637133993Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=634.917µs grafana | logger=migrator t=2024-02-19T09:49:07.64032047Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2024-02-19T09:49:07.64111227Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=791.62µs grafana | logger=migrator t=2024-02-19T09:49:07.646840118Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator 
t=2024-02-19T09:49:07.648012181Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.171623ms grafana | logger=migrator t=2024-02-19T09:49:07.654547708Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2024-02-19T09:49:07.655953624Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.408886ms grafana | logger=migrator t=2024-02-19T09:49:07.659547156Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2024-02-19T09:49:07.65991335Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=365.984µs grafana | logger=migrator t=2024-02-19T09:49:07.664273882Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2024-02-19T09:49:07.664802138Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=528.316µs grafana | logger=migrator t=2024-02-19T09:49:07.667929274Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2024-02-19T09:49:07.667962385Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=27.751µs grafana | logger=migrator t=2024-02-19T09:49:07.671369126Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2024-02-19T09:49:07.673921625Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.551949ms grafana | logger=migrator t=2024-02-19T09:49:07.678010303Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator t=2024-02-19T09:49:07.680508163Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.49732ms grafana | logger=migrator t=2024-02-19T09:49:07.685576312Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2024-02-19T09:49:07.685772905Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=192.582µs grafana | logger=migrator t=2024-02-19T09:49:07.69474313Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2024-02-19T09:49:07.698875748Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.131657ms grafana | logger=migrator t=2024-02-19T09:49:07.702285738Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2024-02-19T09:49:07.705237493Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.951325ms grafana | logger=migrator t=2024-02-19T09:49:07.709486472Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2024-02-19T09:49:07.710198541Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=711.959µs grafana | logger=migrator t=2024-02-19T09:49:07.713452439Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2024-02-19T09:49:07.714036346Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=583.937µs grafana | logger=migrator t=2024-02-19T09:49:07.717946412Z level=info msg="Executing migration" id="create 
dashboard_snapshot table v5 #2" grafana | logger=migrator t=2024-02-19T09:49:07.71867153Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=724.878µs grafana | logger=migrator t=2024-02-19T09:49:07.721750997Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2024-02-19T09:49:07.722607607Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=856.38µs grafana | logger=migrator t=2024-02-19T09:49:07.728780009Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2024-02-19T09:49:07.729612208Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=832.839µs grafana | logger=migrator t=2024-02-19T09:49:07.736370338Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2024-02-19T09:49:07.737797245Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.426646ms grafana | logger=migrator t=2024-02-19T09:49:07.741127123Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2024-02-19T09:49:07.741253645Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=127.442µs grafana | logger=migrator t=2024-02-19T09:49:07.745330133Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2024-02-19T09:49:07.745359494Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=30.241µs grafana | logger=migrator t=2024-02-19T09:49:07.748706573Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2024-02-19T09:49:07.751373694Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.667231ms grafana | logger=migrator t=2024-02-19T09:49:07.756757177Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2024-02-19T09:49:07.75954615Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.788633ms grafana | logger=migrator t=2024-02-19T09:49:07.762809588Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2024-02-19T09:49:07.762904729Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=95.771µs grafana | logger=migrator t=2024-02-19T09:49:07.766109117Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2024-02-19T09:49:07.766811946Z level=info msg="Migration successfully executed" id="create quota table v1" duration=702.419µs policy-apex-pdp | [2024-02-19T09:49:38.978+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-apex-pdp | [2024-02-19T09:49:38.978+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, 
/*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@63f34b70{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@641856{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-02-19T09:49:38.994+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-02-19T09:49:38.994+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-02-19T09:49:38.994+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. prometheus | ts=2024-02-19T09:49:01.480Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d prometheus | ts=2024-02-19T09:49:01.480Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)" prometheus | ts=2024-02-19T09:49:01.480Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)" prometheus | ts=2024-02-19T09:49:01.480Z caller=main.go:594 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" prometheus | ts=2024-02-19T09:49:01.480Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)" prometheus | ts=2024-02-19T09:49:01.480Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)" prometheus | ts=2024-02-19T09:49:01.482Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 prometheus | ts=2024-02-19T09:49:01.483Z caller=main.go:1039 level=info msg="Starting TSDB ..." prometheus | ts=2024-02-19T09:49:01.488Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090 prometheus | ts=2024-02-19T09:49:01.489Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 prometheus | ts=2024-02-19T09:49:01.491Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" prometheus | ts=2024-02-19T09:49:01.491Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.66µs prometheus | ts=2024-02-19T09:49:01.491Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while" prometheus | ts=2024-02-19T09:49:01.492Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 prometheus | ts=2024-02-19T09:49:01.492Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=47.99µs wal_replay_duration=305.614µs wbl_replay_duration=180ns total_replay_duration=472.466µs prometheus | ts=2024-02-19T09:49:01.494Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC prometheus | ts=2024-02-19T09:49:01.494Z caller=main.go:1063 level=info msg="TSDB started" prometheus | ts=2024-02-19T09:49:01.494Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | ts=2024-02-19T09:49:01.495Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=891.241µs db_storage=1.7µs remote_storage=1.5µs web_handler=980ns query_engine=1.64µs scrape=202.332µs scrape_sd=113.792µs notify=23.86µs notify_sd=41.78µs rules=2.82µs tracing=9.381µs prometheus | ts=2024-02-19T09:49:01.495Z caller=main.go:1024 level=info msg="Server is ready to receive web requests." prometheus | ts=2024-02-19T09:49:01.495Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..." grafana | logger=migrator t=2024-02-19T09:49:07.771398929Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2024-02-19T09:49:07.77317893Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.779541ms grafana | logger=migrator t=2024-02-19T09:49:07.780305453Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2024-02-19T09:49:07.780351954Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=49.171µs grafana | logger=migrator t=2024-02-19T09:49:07.783676133Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2024-02-19T09:49:07.784538343Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=862.22µs grafana | logger=migrator t=2024-02-19T09:49:07.787561049Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2024-02-19T09:49:07.788726972Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.165393ms grafana | logger=migrator t=2024-02-19T09:49:07.793102204Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2024-02-19T09:49:07.796945458Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.845244ms grafana | logger=migrator t=2024-02-19T09:49:07.800726003Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2024-02-19T09:49:07.800785044Z level=info 
msg="Migration successfully executed" id="Update plugin_setting table charset" duration=64.071µs grafana | logger=migrator t=2024-02-19T09:49:07.804557808Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2024-02-19T09:49:07.805369897Z level=info msg="Migration successfully executed" id="create session table" duration=811.689µs grafana | logger=migrator t=2024-02-19T09:49:07.809286824Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2024-02-19T09:49:07.809416485Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=129.151µs grafana | logger=migrator t=2024-02-19T09:49:07.813426272Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2024-02-19T09:49:07.813584684Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=164.322µs grafana | logger=migrator t=2024-02-19T09:49:07.817986756Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2024-02-19T09:49:07.819013638Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.026792ms grafana | logger=migrator t=2024-02-19T09:49:07.82345326Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2024-02-19T09:49:07.824521112Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.067782ms grafana | logger=migrator t=2024-02-19T09:49:07.828111924Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2024-02-19T09:49:07.828238657Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=134.642µs grafana | logger=migrator t=2024-02-19T09:49:07.832376704Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2024-02-19T09:49:07.832427575Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=51.741µs grafana | logger=migrator t=2024-02-19T09:49:07.835584852Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2024-02-19T09:49:07.838579567Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.994545ms grafana | logger=migrator t=2024-02-19T09:49:07.842356342Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2024-02-19T09:49:07.845435418Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.081886ms grafana | logger=migrator t=2024-02-19T09:49:07.852390039Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2024-02-19T09:49:07.852493Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=98.091µs grafana | logger=migrator t=2024-02-19T09:49:07.856589609Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2024-02-19T09:49:07.85668891Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=99.861µs grafana | logger=migrator t=2024-02-19T09:49:07.859801366Z level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2024-02-19T09:49:07.861215902Z level=info msg="Migration successfully executed" id="create 
preferences table v3" duration=1.409686ms grafana | logger=migrator t=2024-02-19T09:49:07.867549038Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2024-02-19T09:49:07.867634999Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=86.671µs grafana | logger=migrator t=2024-02-19T09:49:07.870900577Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2024-02-19T09:49:07.874218696Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.317759ms grafana | logger=migrator t=2024-02-19T09:49:07.877316742Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2024-02-19T09:49:07.877482514Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=165.982µs grafana | logger=migrator t=2024-02-19T09:49:07.881229867Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2024-02-19T09:49:07.884450646Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.217319ms grafana | logger=migrator t=2024-02-19T09:49:07.892899435Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2024-02-19T09:49:07.896515427Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.614142ms grafana | logger=migrator t=2024-02-19T09:49:07.899939807Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2024-02-19T09:49:07.900076609Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=137.842µs grafana | logger=migrator t=2024-02-19T09:49:07.90360725Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2024-02-19T09:49:07.904769884Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.160824ms grafana | logger=migrator t=2024-02-19T09:49:07.908932383Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2024-02-19T09:49:07.909893995Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=961.292µs grafana | logger=migrator t=2024-02-19T09:49:07.913121432Z level=info msg="Executing migration" id="create alert table v1" kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | SLF4J: Class path contains multiple SLF4J bindings. kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
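The "===> Check if Zookeeper is healthy ..." preflight above is performed by a bundled Java client, which is where the SLF4J multiple-bindings noise comes from. A rough shell equivalent, as a sketch only: it assumes the zookeeper service whitelists the four-letter-word commands (4lw.commands.whitelist=ruok,srvr), which this compose setup may or may not do.

    # Probe the same zookeeper:2181 endpoint the preflight connects to.
    echo ruok | nc zookeeper 2181   # a healthy server answers: imok
    echo srvr | nc zookeeper 2181   # version, latency and mode summary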
kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] kafka | [2024-02-19 09:49:07,866] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,866] INFO Client environment:host.name=0e55bf7c996a (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,866] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,866] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,866] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,867] INFO Client environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/us
r/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base
-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,867] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,867] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,867] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,867] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,867] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,867] INFO Client 
environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,867] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,868] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,868] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,868] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,868] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,868] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,872] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@2fd6b6c7 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:07,876] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-02-19 09:49:07,882] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-02-19 09:49:07,890] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-19 09:49:07,911] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-19 09:49:07,911] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) policy-apex-pdp | [2024-02-19T09:49:38.995+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@63f34b70{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@641856{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-02-19T09:49:39.127+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2, groupId=806dde56-8d7b-4023-b37b-d9545bfe5732] Cluster ID: -JbPgT5mQDm7ZrQFfE5t_Q policy-apex-pdp | [2024-02-19T09:49:39.127+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: -JbPgT5mQDm7ZrQFfE5t_Q policy-apex-pdp | 
[2024-02-19T09:49:39.129+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2, groupId=806dde56-8d7b-4023-b37b-d9545bfe5732] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-apex-pdp | [2024-02-19T09:49:39.130+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-apex-pdp | [2024-02-19T09:49:39.135+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2, groupId=806dde56-8d7b-4023-b37b-d9545bfe5732] (Re-)joining group policy-apex-pdp | [2024-02-19T09:49:39.149+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2, groupId=806dde56-8d7b-4023-b37b-d9545bfe5732] Request joining group due to: need to re-join with the given member-id: consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2-5e8229a3-40aa-439c-bc1c-c983f4eb0b77 policy-apex-pdp | [2024-02-19T09:49:39.151+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2, groupId=806dde56-8d7b-4023-b37b-d9545bfe5732] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-apex-pdp | [2024-02-19T09:49:39.151+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2, groupId=806dde56-8d7b-4023-b37b-d9545bfe5732] (Re-)joining group policy-apex-pdp | [2024-02-19T09:49:39.709+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-apex-pdp | [2024-02-19T09:49:39.711+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-apex-pdp | [2024-02-19T09:49:42.159+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2, groupId=806dde56-8d7b-4023-b37b-d9545bfe5732] Successfully joined group with generation Generation{generationId=1, memberId='consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2-5e8229a3-40aa-439c-bc1c-c983f4eb0b77', protocol='range'} policy-apex-pdp | [2024-02-19T09:49:42.170+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2, groupId=806dde56-8d7b-4023-b37b-d9545bfe5732] Finished assignment for group at generation 1: {consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2-5e8229a3-40aa-439c-bc1c-c983f4eb0b77=Assignment(partitions=[policy-pdp-pap-0])} policy-apex-pdp | [2024-02-19T09:49:42.178+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2, groupId=806dde56-8d7b-4023-b37b-d9545bfe5732] Successfully synced group in generation Generation{generationId=1, memberId='consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2-5e8229a3-40aa-439c-bc1c-c983f4eb0b77', protocol='range'} policy-apex-pdp | [2024-02-19T09:49:42.179+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2, groupId=806dde56-8d7b-4023-b37b-d9545bfe5732] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-apex-pdp | [2024-02-19T09:49:42.184+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2, groupId=806dde56-8d7b-4023-b37b-d9545bfe5732] Adding newly assigned partitions: policy-pdp-pap-0 policy-apex-pdp | [2024-02-19T09:49:42.193+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2, groupId=806dde56-8d7b-4023-b37b-d9545bfe5732] Found no committed offset for partition policy-pdp-pap-0 policy-apex-pdp | [2024-02-19T09:49:42.205+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2, groupId=806dde56-8d7b-4023-b37b-d9545bfe5732] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-apex-pdp | [2024-02-19T09:49:56.178+00:00|INFO|RequestLog|qtp1068445309-33] 172.17.0.4 - policyadmin [19/Feb/2024:09:49:56 +0000] "GET /metrics HTTP/1.1" 200 10652 "-" "Prometheus/2.49.1" policy-apex-pdp | [2024-02-19T09:49:58.773+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2632223f-979c-46ca-b8b2-6772ebd25f34","timestampMs":1708336198772,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-02-19T09:49:58.799+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2632223f-979c-46ca-b8b2-6772ebd25f34","timestampMs":1708336198772,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-02-19T09:49:58.801+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-02-19T09:49:58.957+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-89db414e-61b1-454e-b88b-59220abcdad7","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"102dc6a6-6858-4db0-95fe-a28908d45b01","timestampMs":1708336198898,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-19T09:49:58.966+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-apex-pdp | [2024-02-19T09:49:58.966+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4b869b48-595d-46b2-866d-f92ac2000e9f","timestampMs":1708336198966,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-02-19T09:49:58.968+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"102dc6a6-6858-4db0-95fe-a28908d45b01","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"2a703d74-e531-4901-bb06-fdd53ef492c4","timestampMs":1708336198968,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-19T09:49:58.980+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4b869b48-595d-46b2-866d-f92ac2000e9f","timestampMs":1708336198966,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-02-19T09:49:58.981+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-02-19T09:49:58.986+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"102dc6a6-6858-4db0-95fe-a28908d45b01","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"2a703d74-e531-4901-bb06-fdd53ef492c4","timestampMs":1708336198968,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-19T09:49:58.987+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-02-19T09:49:59.032+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-89db414e-61b1-454e-b88b-59220abcdad7","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"79d938ec-9d1c-4da5-9307-1df889903ca3","timestampMs":1708336198899,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-19T09:49:59.035+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"79d938ec-9d1c-4da5-9307-1df889903ca3","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"cae7ba32-f300-4525-a0b4-4f77bf4189d3","timestampMs":1708336199035,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-19T09:49:59.049+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"79d938ec-9d1c-4da5-9307-1df889903ca3","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"cae7ba32-f300-4525-a0b4-4f77bf4189d3","timestampMs":1708336199035,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-19T09:49:59.049+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-02-19T09:49:59.079+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-89db414e-61b1-454e-b88b-59220abcdad7","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"27c77e08-86d8-4d6c-a377-69c796e75a58","timestampMs":1708336199051,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-19T09:49:59.080+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"27c77e08-86d8-4d6c-a377-69c796e75a58","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"18578a30-f6a6-427e-92fb-ae0f1664710f","timestampMs":1708336199080,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-19T09:49:59.091+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"27c77e08-86d8-4d6c-a377-69c796e75a58","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"18578a30-f6a6-427e-92fb-ae0f1664710f","timestampMs":1708336199080,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-19T09:49:59.092+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-02-19T09:50:56.089+00:00|INFO|RequestLog|qtp1068445309-28] 172.17.0.4 - policyadmin [19/Feb/2024:09:50:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.49.1" kafka | [2024-02-19 09:49:07,921] INFO Socket connection established, initiating session, client: /172.17.0.9:33958, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-19 09:49:07,962] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003b2d20000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-19 09:49:08,108] INFO Session: 0x1000003b2d20000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:08,108] INFO EventThread shut down for session: 0x1000003b2d20000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
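The JSON payloads above are the PDP heartbeat/acknowledgement exchange on the policy-pdp-pap topic: the apex PDP publishes PDP_STATUS heartbeats, PAP answers with PDP_UPDATE and PDP_STATE_CHANGE, and the PDP acknowledges each with a PDP_STATUS whose "response" block echoes the request id in responseTo. A minimal sketch of building one such heartbeat payload follows; the POJO is hypothetical (Gson is used only for illustration here, the PDP itself logs a GsonMessageBodyHandler for its REST endpoint), but the field names are copied from the log:

    import com.google.gson.Gson;
    import java.util.UUID;

    public class PdpStatusSketch {
        // Field names mirror the PDP_STATUS JSON entries seen in the log above.
        String pdpType = "apex";
        String state = "PASSIVE";            // becomes ACTIVE after PDP_STATE_CHANGE
        String healthy = "HEALTHY";
        String description = "Pdp Heartbeat";
        String messageName = "PDP_STATUS";
        String requestId = UUID.randomUUID().toString();
        long timestampMs = System.currentTimeMillis();
        String name = "apex-example";        // placeholder PDP instance name
        String pdpGroup = "defaultGroup";

        public static void main(String[] args) {
            // Serialize as it would appear on [OUT|KAFKA|policy-pdp-pap].
            System.out.println(new Gson().toJson(new PdpStatusSketch()));
        }
    }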
kafka | [2024-02-19 09:49:08,879] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-02-19 09:49:09,271] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-02-19 09:49:09,348] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-02-19 09:49:09,350] INFO starting (kafka.server.KafkaServer) kafka | [2024-02-19 09:49:09,350] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-02-19 09:49:09,364] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-02-19 09:49:09,368] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,368] INFO Client environment:host.name=0e55bf7c996a (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,368] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,368] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,368] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) policy-db-migrator | Waiting for mariadb port 3306... policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded! 
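The repeated "Connection refused" lines above are the migrator's startup gate: it retries a TCP connect (via nc) until MariaDB accepts connections, then proceeds with the schema upgrade. A minimal sketch of the same wait-for-port loop; host and port are taken from the log, while the timeout and retry interval are assumptions:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class WaitForPort {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                try (Socket s = new Socket()) {
                    // Equivalent of: nc -z mariadb 3306
                    s.connect(new InetSocketAddress("mariadb", 3306), 1000);
                    System.out.println("Connection to mariadb 3306 port succeeded!");
                    return;
                } catch (IOException e) {
                    System.out.println("connect to mariadb port 3306 failed: retrying");
                    Thread.sleep(2000); // retry interval is an assumption
                }
            }
        }
    }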
policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | name version policy-db-migrator | policyadmin 0 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql grafana | logger=migrator t=2024-02-19T09:49:07.914179434Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.056092ms grafana | logger=migrator t=2024-02-19T09:49:07.918793599Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2024-02-19T09:49:07.920467848Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.670839ms grafana | logger=migrator t=2024-02-19T09:49:07.925043082Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2024-02-19T09:49:07.925962182Z level=info msg="Migration successfully executed" id="add index alert state" duration=918.91µs grafana | logger=migrator t=2024-02-19T09:49:07.931053622Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2024-02-19T09:49:07.932056024Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.001702ms grafana | logger=migrator t=2024-02-19T09:49:07.935813808Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2024-02-19T09:49:07.936432705Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=618.517µs grafana | logger=migrator t=2024-02-19T09:49:07.93941642Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2024-02-19T09:49:07.940534534Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.117724ms grafana | logger=migrator t=2024-02-19T09:49:07.944689382Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2024-02-19T09:49:07.945536602Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=848.34µs grafana | logger=migrator t=2024-02-19T09:49:07.950209087Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2024-02-19T09:49:07.965177522Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=14.973255ms grafana | logger=migrator t=2024-02-19T09:49:08.002943212Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator 
t=2024-02-19T09:49:08.004190161Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.246039ms grafana | logger=migrator t=2024-02-19T09:49:08.009453881Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2024-02-19T09:49:08.010807753Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.359883ms grafana | logger=migrator t=2024-02-19T09:49:08.015537925Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2024-02-19T09:49:08.01592851Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=383.735µs grafana | logger=migrator t=2024-02-19T09:49:08.019713734Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" grafana | logger=migrator t=2024-02-19T09:49:08.020337399Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=622.835µs grafana | logger=migrator t=2024-02-19T09:49:08.024645609Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2024-02-19T09:49:08.025425746Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=779.567µs grafana | logger=migrator t=2024-02-19T09:49:08.028394373Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2024-02-19T09:49:08.032358618Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.961855ms grafana | logger=migrator t=2024-02-19T09:49:08.035966351Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2024-02-19T09:49:08.039834036Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.867015ms grafana | logger=migrator t=2024-02-19T09:49:08.044045585Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2024-02-19T09:49:08.047890989Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.844874ms grafana | logger=migrator t=2024-02-19T09:49:08.051141859Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2024-02-19T09:49:08.054955103Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.812435ms grafana | logger=migrator t=2024-02-19T09:49:08.06016941Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2024-02-19T09:49:08.061404912Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.234942ms grafana | logger=migrator t=2024-02-19T09:49:08.066632289Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2024-02-19T09:49:08.06666111Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=29.371µs grafana | logger=migrator t=2024-02-19T09:49:08.069887419Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2024-02-19T09:49:08.069952259Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=64.95µs grafana | logger=migrator 
t=2024-02-19T09:49:08.073804634Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2024-02-19T09:49:08.074996135Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.191241ms grafana | logger=migrator t=2024-02-19T09:49:08.080768907Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2024-02-19T09:49:08.082771225Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=2.005788ms grafana | logger=migrator t=2024-02-19T09:49:08.086339858Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2024-02-19T09:49:08.087313976Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=975.868µs grafana | logger=migrator t=2024-02-19T09:49:08.090636376Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2024-02-19T09:49:08.091609626Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=973.18µs policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name 
VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | -------------- policy-pap | Waiting for mariadb port 3306... policy-pap | mariadb (172.17.0.2:3306) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.9:9092) open policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.7:6969) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . 
____          _            __ _ _
policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-pap |  =========|_|==============|___/=/_/_/_/
policy-pap |  :: Spring Boot ::                (v3.1.8)
policy-pap |
policy-pap | [2024-02-19T09:49:26.300+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 32 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-pap | [2024-02-19T09:49:26.302+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-pap | [2024-02-19T09:49:28.371+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2024-02-19T09:49:28.521+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 136 ms. Found 7 JPA repository interfaces.
policy-pap | [2024-02-19T09:49:28.999+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-pap | [2024-02-19T09:49:29.000+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-pap | [2024-02-19T09:49:29.817+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-pap | [2024-02-19T09:49:29.831+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-pap | [2024-02-19T09:49:29.834+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-pap | [2024-02-19T09:49:29.834+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
policy-pap | [2024-02-19T09:49:29.965+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-pap | [2024-02-19T09:49:29.966+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3559 ms
policy-pap | [2024-02-19T09:49:30.457+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-pap | [2024-02-19T09:49:30.559+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
policy-pap | [2024-02-19T09:49:30.563+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
policy-pap | [2024-02-19T09:49:30.607+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-pap | [2024-02-19T09:49:30.984+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-pap | [2024-02-19T09:49:31.007+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-pap | [2024-02-19T09:49:31.146+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@36a6bea6
policy-pap | [2024-02-19T09:49:31.149+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
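"HikariPool-1 - Start completed." above marks PAP's JPA datasource coming up against the MariaDB container. A minimal sketch of an equivalent HikariCP pool over the MariaDB driver; the JDBC URL, credentials, and pool size are placeholders, not the values the PAP container actually uses:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class PoolSketch {
        public static void main(String[] args) {
            HikariConfig cfg = new HikariConfig();
            cfg.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin"); // placeholder URL
            cfg.setUsername("policy_user");                            // placeholder
            cfg.setPassword("policy_password");                        // placeholder
            cfg.setMaximumPoolSize(10);                                // assumption
            try (HikariDataSource ds = new HikariDataSource(cfg)) {
                // Opening the pool eagerly creates the first connection, matching
                // the "Added connection org.mariadb.jdbc.Connection@..." entry above.
                System.out.println(ds.getPoolName() + " started");
            }
        }
    }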
policy-pap | [2024-02-19T09:49:31.180+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) policy-pap | [2024-02-19T09:49:31.182+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead policy-pap | [2024-02-19T09:49:33.314+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-pap | [2024-02-19T09:49:33.318+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2024-02-19T09:49:33.937+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository policy-pap | [2024-02-19T09:49:34.443+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository policy-pap | [2024-02-19T09:49:34.570+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository policy-pap | [2024-02-19T09:49:34.902+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-5cceb518-7b72-41da-b42c-3c8775105be3-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 kafka | [2024-02-19 09:49:09,368] INFO Client 
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:
/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,368] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,368] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | 
[2024-02-19 09:49:09,368] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,368] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,368] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,368] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,368] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,368] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,369] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,369] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,369] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,369] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,371] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1f6c9cd8 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-19 09:49:09,375] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-02-19 09:49:09,381] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | overriding logback.xml simulator | 2024-02-19 09:49:00,894 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | 2024-02-19 09:49:00,957 INFO org.onap.policy.models.simulators starting simulator | 2024-02-19 09:49:00,957 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties simulator | 2024-02-19 09:49:01,161 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION simulator | 2024-02-19 09:49:01,162 INFO org.onap.policy.models.simulators starting A&AI simulator simulator | 2024-02-19 09:49:01,317 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-02-19 09:49:01,330 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, 
jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-02-19 09:49:01,333 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-02-19 09:49:01,340 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 simulator | 2024-02-19 09:49:01,401 INFO Session workerName=node0 simulator | 2024-02-19 09:49:02,053 INFO Using GSON for REST calls simulator | 2024-02-19 09:49:02,129 INFO Started o.e.j.s.ServletContextHandler@b6b1987{/,null,AVAILABLE} simulator | 2024-02-19 09:49:02,138 INFO Started A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} simulator | 2024-02-19 09:49:02,145 INFO Started Server@2a2c13a8{STARTING}[11.0.20,sto=0] @1752ms simulator | 2024-02-19 09:49:02,146 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,AVAILABLE}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4188 ms. 
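Each simulator above follows the same JettyServletServer lifecycle: WAITED-START, STARTING, then Started with a "pending time" once the servlet context is AVAILABLE. A minimal sketch of that pattern with embedded Jetty 11: one ServletContextHandler at "/" with a servlet mapped to "/*", as the A&AI simulator does on port 6666. The trivial servlet body is an assumption; the real simulator mounts a Jersey ServletContainer there:

    import jakarta.servlet.http.HttpServlet;
    import jakarta.servlet.http.HttpServletRequest;
    import jakarta.servlet.http.HttpServletResponse;
    import java.io.IOException;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;

    public class SimulatorSketch {
        public static void main(String[] args) throws Exception {
            Server server = new Server(6666);              // A&AI simulator port
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");
            context.addServlet(new ServletHolder(new HttpServlet() {
                @Override
                protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                        throws IOException {
                    resp.getWriter().println("simulator up"); // stand-in for Jersey
                }
            }), "/*");
            server.setHandler(context);
            server.start();  // corresponds to "Started Server@...[11.0.20,sto=0]" above
            server.join();
        }
    }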
simulator | 2024-02-19 09:49:02,153 INFO org.onap.policy.models.simulators starting SDNC simulator
simulator | 2024-02-19 09:49:02,156 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2024-02-19 09:49:02,157 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-02-19 09:49:02,157 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-02-19 09:49:02,159 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
simulator | 2024-02-19 09:49:02,165 INFO Session workerName=node0
simulator | 2024-02-19 09:49:02,228 INFO Using GSON for REST calls
simulator | 2024-02-19 09:49:02,239 INFO Started o.e.j.s.ServletContextHandler@6941827a{/,null,AVAILABLE}
simulator | 2024-02-19 09:49:02,241 INFO Started SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
simulator | 2024-02-19 09:49:02,241 INFO Started Server@62452cc9{STARTING}[11.0.20,sto=0] @1848ms
simulator | 2024-02-19 09:49:02,241 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,AVAILABLE}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4916 ms.
simulator | 2024-02-19 09:49:02,243 INFO org.onap.policy.models.simulators starting SO simulator
simulator | 2024-02-19 09:49:02,248 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2024-02-19 09:49:02,249 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-02-19 09:49:02,253 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-02-19 09:49:02,254 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
simulator | 2024-02-19 09:49:02,259 INFO Session workerName=node0
simulator | 2024-02-19 09:49:02,317 INFO Using GSON for REST calls
simulator | 2024-02-19 09:49:02,331 INFO Started o.e.j.s.ServletContextHandler@5e81e5ac{/,null,AVAILABLE}
simulator | 2024-02-19 09:49:02,332 INFO Started SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
simulator | 2024-02-19 09:49:02,333 INFO Started Server@488eb7f2{STARTING}[11.0.20,sto=0] @1939ms
simulator | 2024-02-19 09:49:02,333 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,AVAILABLE}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4918 ms.
simulator | 2024-02-19 09:49:02,335 INFO org.onap.policy.models.simulators starting VFC simulator
simulator | 2024-02-19 09:49:02,339 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2024-02-19 09:49:02,340 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-02-19 09:49:02,341 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-02-19 09:49:02,342 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
simulator | 2024-02-19 09:49:02,347 INFO Session workerName=node0
simulator | 2024-02-19 09:49:02,394 INFO Using GSON for REST calls
simulator | 2024-02-19 09:49:02,404 INFO Started o.e.j.s.ServletContextHandler@320de594{/,null,AVAILABLE}
simulator | 2024-02-19 09:49:02,406 INFO Started VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
simulator | 2024-02-19 09:49:02,406 INFO Started Server@6035b93b{STARTING}[11.0.20,sto=0] @2013ms
simulator | 2024-02-19 09:49:02,406 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,AVAILABLE}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4935 ms.
simulator | 2024-02-19 09:49:02,407 INFO org.onap.policy.models.simulators started
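The three blocks above show the same startup sequence three times: for each simulator (SDNC on 6668, SO on 6669, VFC on 6670) a Jersey ServletContainer is mounted at /* inside an embedded Jetty server, and the wrapper logs the WAITED-START, STARTING and STARTED states together with the pending time. A minimal sketch of that pattern, assuming stock Jetty 11 and Jersey 3 APIs; the resource package name is invented for illustration and is not the simulators' actual wiring:

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;
    import org.glassfish.jersey.servlet.ServletContainer;

    public class EmbeddedSimulatorSketch {
        public static void main(String[] args) throws Exception {
            Server server = new Server(6668); // SDNC simulator port from the log
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");
            // Jersey servlet dispatching REST calls, as in the servlets={/*=...} dumps above
            ServletHolder jersey = new ServletHolder(new ServletContainer());
            jersey.setInitParameter("jersey.config.server.provider.packages",
                    "org.example.simulator.rest"); // hypothetical resource package
            context.addServlet(jersey, "/*");
            server.setHandler(context);
            server.start(); // produces the "Started Server@..." lines seen above
            server.join();
        }
    }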
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
kafka | [2024-02-19 09:49:09,383] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-02-19 09:49:09,388] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2024-02-19 09:49:09,396] INFO Socket connection established, initiating session, client: /172.17.0.9:33960, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-02-19 09:49:09,405] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003b2d20001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-02-19 09:49:09,410] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
kafka | [2024-02-19 09:49:09,757] INFO Cluster ID = -JbPgT5mQDm7ZrQFfE5t_Q (kafka.server.KafkaServer)
kafka | [2024-02-19 09:49:09,762] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka | [2024-02-19 09:49:09,814] INFO KafkaConfig values:
kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
kafka | alter.config.policy.class.name = null
kafka | alter.log.dirs.replication.quota.window.num = 11
kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
kafka | authorizer.class.name =
kafka | auto.create.topics.enable = true
kafka | auto.include.jmx.reporter = true
kafka | auto.leader.rebalance.enable = true
kafka | background.threads = 10
kafka | broker.heartbeat.interval.ms = 2000
kafka | broker.id = 1
kafka | broker.id.generation.enable = true
kafka | broker.rack = null
kafka | broker.session.timeout.ms = 9000
kafka | client.quota.callback.class = null
kafka | compression.type = producer
kafka | connection.failed.authentication.delay.ms = 100
kafka | connections.max.idle.ms = 600000
kafka | connections.max.reauth.ms = 0
kafka | control.plane.listener.name = null
kafka | controlled.shutdown.enable = true
kafka | controlled.shutdown.max.retries = 3
kafka | controlled.shutdown.retry.backoff.ms = 5000
kafka | controller.listener.names = null
kafka | controller.quorum.append.linger.ms = 25
kafka | controller.quorum.election.backoff.max.ms = 1000
kafka | controller.quorum.election.timeout.ms = 1000
kafka | controller.quorum.fetch.timeout.ms = 2000
kafka | controller.quorum.request.timeout.ms = 2000
kafka | controller.quorum.retry.backoff.ms = 20
kafka | controller.quorum.voters = []
kafka | controller.quota.window.num = 11
kafka | controller.quota.window.size.seconds = 1
kafka | controller.socket.timeout.ms = 30000
kafka | create.topic.policy.class.name = null
kafka | default.replication.factor = 1
kafka | delegation.token.expiry.check.interval.ms = 3600000
kafka | delegation.token.expiry.time.ms = 86400000
kafka | delegation.token.master.key = null
kafka | delegation.token.max.lifetime.ms = 604800000
kafka | delegation.token.secret.key = null
kafka | delete.records.purgatory.purge.interval.requests = 1
kafka | delete.topic.enable = true
kafka | early.start.listeners = null
kafka | fetch.max.bytes = 57671680
kafka | fetch.purgatory.purge.interval.requests = 1000
kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor]
kafka | group.consumer.heartbeat.interval.ms = 5000
kafka | group.consumer.max.heartbeat.interval.ms = 15000
kafka | group.consumer.max.session.timeout.ms = 60000
kafka | group.consumer.max.size = 2147483647
kafka | group.consumer.min.heartbeat.interval.ms = 5000
kafka | group.consumer.min.session.timeout.ms = 45000
kafka | group.consumer.session.timeout.ms = 45000
policy-pap | default.api.timeout.ms = 60000
policy-pap | enable.auto.commit = true
policy-pap | exclude.internal.topics = true
policy-pap | fetch.max.bytes = 52428800
policy-pap | fetch.max.wait.ms = 500
policy-pap | fetch.min.bytes = 1
policy-pap | group.id = 5cceb518-7b72-41da-b42c-3c8775105be3
policy-pap | group.instance.id = null
policy-pap | heartbeat.interval.ms = 3000
policy-pap | interceptor.classes = []
policy-pap | internal.leave.group.on.close = true
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | isolation.level = read_uncommitted
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | max.partition.fetch.bytes = 1048576
policy-pap | max.poll.interval.ms = 300000
policy-pap | max.poll.records = 500
policy-pap | metadata.max.age.ms = 300000
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | receive.buffer.bytes = 65536
policy-pap | reconnect.backoff.max.ms = 1000
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
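The ConsumerConfig dump above is the consumer policy-pap builds for the policy-pdp-pap topic (the subscription itself is logged further down). A rough equivalent using the plain Kafka client API, with the bootstrap server, group id and deserializers copied from the dump; this is a sketch, not PAP's actual construction code:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092");
            props.put("group.id", "5cceb518-7b72-41da-b42c-3c8775105be3");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("auto.offset.reset", "latest"); // as logged above
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }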
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
kafka | group.coordinator.new.enable = false
policy-db-migrator | --------------
policy-pap | sasl.login.read.timeout.ms = null
kafka | group.coordinator.threads = 1
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-02-19T09:49:08.095618972Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
policy-db-migrator |
kafka | group.initial.rebalance.delay.ms = 3000
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-02-19T09:49:08.09655338Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=933.708µs
policy-db-migrator |
kafka | group.max.session.timeout.ms = 1800000
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-02-19T09:49:08.10096158Z level=info msg="Executing migration" id="Add for to alert table"
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
kafka | group.max.size = 2147483647
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-02-19T09:49:08.105073457Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.110767ms
policy-db-migrator | --------------
kafka | group.min.session.timeout.ms = 6000
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-02-19T09:49:08.10983007Z level=info msg="Executing migration" id="Add column uid in alert_notification"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | initial.broker.registration.timeout.ms = 60000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-db-migrator | --------------
kafka | inter.broker.listener.name = PLAINTEXT
kafka | inter.broker.protocol.version = 3.6-IV2
policy-pap | sasl.mechanism = GSSAPI
policy-db-migrator |
kafka | kafka.metrics.polling.interval.secs = 10
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.113852616Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.021706ms
kafka | kafka.metrics.reporters = []
policy-pap | sasl.oauthbearer.expected.audience = null
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
grafana | logger=migrator t=2024-02-19T09:49:08.117985104Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
kafka | leader.imbalance.check.interval.seconds = 300
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.118294237Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=308.943µs
kafka | leader.imbalance.per.broker.percentage = 10
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-02-19T09:49:08.122096641Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.122886069Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=790.288µs
kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.127045016Z level=info msg="Executing migration" id="Remove unique index org_id_name"
kafka | log.cleaner.backoff.ms = 15000
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.127852823Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=807.867µs
kafka | log.cleaner.dedupe.buffer.size = 134217728
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
grafana | logger=migrator t=2024-02-19T09:49:08.131748079Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
kafka | log.cleaner.delete.retention.ms = 86400000
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.135463812Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.710993ms
kafka | log.cleaner.enable = true
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-02-19T09:49:08.138331848Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
kafka | log.cleaner.io.buffer.load.factor = 0.9
policy-pap | security.protocol = PLAINTEXT
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.13856267Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=231.212µs
grafana | logger=migrator t=2024-02-19T09:49:08.143545685Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
policy-pap | security.providers = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.145980547Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=2.462463ms
grafana | logger=migrator t=2024-02-19T09:49:08.152303553Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
policy-pap | send.buffer.bytes = 131072
policy-db-migrator |
kafka | log.cleaner.io.buffer.size = 524288
policy-pap | session.timeout.ms = 45000
grafana | logger=migrator t=2024-02-19T09:49:08.152986589Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=682.596µs
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
policy-pap | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-02-19T09:49:08.157698692Z level=info msg="Executing migration" id="Drop old annotation table v4"
policy-db-migrator | --------------
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
policy-pap | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-02-19T09:49:08.157886103Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=178.121µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | log.cleaner.min.cleanable.ratio = 0.5
policy-pap | ssl.cipher.suites = null
grafana | logger=migrator t=2024-02-19T09:49:08.160952311Z level=info msg="Executing migration" id="create annotation table v5"
policy-db-migrator | --------------
kafka | log.cleaner.min.compaction.lag.ms = 0
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-02-19T09:49:08.162201233Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.248212ms
policy-db-migrator |
kafka | log.cleaner.threads = 1
policy-pap | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-02-19T09:49:08.165512072Z level=info msg="Executing migration" id="add index annotation 0 v3"
policy-db-migrator |
kafka | log.cleanup.policy = [delete]
policy-pap | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-02-19T09:49:08.16637919Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=866.878µs
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
kafka | log.dir = /tmp/kafka-logs
policy-pap | ssl.key.password = null
grafana | logger=migrator t=2024-02-19T09:49:08.171576507Z level=info msg="Executing migration" id="add index annotation 1 v3"
policy-db-migrator | --------------
kafka | log.dirs = /var/lib/kafka/data
policy-pap | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-02-19T09:49:08.172653486Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.076039ms
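Each policy-db-migrator step follows the same shape: print the numbered script name (0230-..., 0240-..., and so on), run its DDL, and move on; because every statement is CREATE TABLE IF NOT EXISTS, rerunning the migrator is harmless. A sketch of executing one such script over JDBC; the URL, credentials and file path are illustrative, not the values the CSIT stack actually uses:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class UpgradeScriptSketch {
        public static void main(String[] args) throws Exception {
            String script = "0230-jpatoscadatatype_properties.sql"; // one step from the log
            String ddl = Files.readString(Path.of(script));
            System.out.println("> upgrade " + script);
            try (Connection conn = DriverManager.getConnection(
                        "jdbc:mariadb://mariadb:3306/policyadmin", "user", "password"); // illustrative
                 Statement stmt = conn.createStatement()) {
                stmt.execute(ddl); // IF NOT EXISTS keeps the step idempotent
            }
        }
    }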
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
kafka | log.flush.interval.messages = 9223372036854775807
policy-pap | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-02-19T09:49:08.176475641Z level=info msg="Executing migration" id="add index annotation 2 v3"
policy-db-migrator | --------------
kafka | log.flush.interval.ms = null
policy-pap | ssl.keystore.key = null
grafana | logger=migrator t=2024-02-19T09:49:08.17744447Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=967.909µs
policy-db-migrator |
kafka | log.flush.offset.checkpoint.interval.ms = 60000
policy-pap | ssl.keystore.location = null
grafana | logger=migrator t=2024-02-19T09:49:08.181028472Z level=info msg="Executing migration" id="add index annotation 3 v3"
policy-db-migrator |
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
policy-pap | ssl.keystore.password = null
grafana | logger=migrator t=2024-02-19T09:49:08.182070151Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.040979ms
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
policy-pap | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-02-19T09:49:08.185757854Z level=info msg="Executing migration" id="add index annotation 4 v3"
policy-db-migrator | --------------
kafka | log.index.interval.bytes = 4096
policy-pap | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-02-19T09:49:08.186801734Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.04357ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
kafka | log.index.size.max.bytes = 10485760
policy-pap | ssl.provider = null
grafana | logger=migrator t=2024-02-19T09:49:08.191055732Z level=info msg="Executing migration" id="Update annotation table charset"
policy-db-migrator | --------------
kafka | log.local.retention.bytes = -2
policy-pap | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-02-19T09:49:08.191083722Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=31.93µs
policy-db-migrator |
kafka | log.local.retention.ms = -2
policy-pap | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-02-19T09:49:08.196866064Z level=info msg="Executing migration" id="Add column region_id to annotation table"
policy-db-migrator |
kafka | log.message.downconversion.enable = true
policy-pap | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-02-19T09:49:08.204083379Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=7.215085ms
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
kafka | log.message.format.version = 3.0-IV1
policy-pap | ssl.truststore.location = null
grafana | logger=migrator t=2024-02-19T09:49:08.208152276Z level=info msg="Executing migration" id="Drop category_id index"
policy-db-migrator | --------------
kafka | log.message.timestamp.after.max.ms = 9223372036854775807
policy-pap | ssl.truststore.password = null
grafana | logger=migrator t=2024-02-19T09:49:08.208753041Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=600.645µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | log.message.timestamp.before.max.ms = 9223372036854775807
policy-pap | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-02-19T09:49:08.211749728Z level=info msg="Executing migration" id="Add column tags to annotation table"
policy-db-migrator | --------------
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
grafana | logger=migrator t=2024-02-19T09:49:08.214612253Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=2.861735ms
grafana | logger=migrator t=2024-02-19T09:49:08.218840212Z level=info msg="Executing migration" id="Create annotation_tag table v2"
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.219480377Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=632.525µs
grafana | logger=migrator t=2024-02-19T09:49:08.22416021Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
policy-pap |
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.22539119Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.2303ms
grafana | logger=migrator t=2024-02-19T09:49:08.228544719Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
policy-pap | [2024-02-19T09:49:35.116+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
grafana | logger=migrator t=2024-02-19T09:49:08.229328806Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=784.007µs
grafana | logger=migrator t=2024-02-19T09:49:08.232366134Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
policy-pap | [2024-02-19T09:49:35.116+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.249072604Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=16.70641ms
grafana | logger=migrator t=2024-02-19T09:49:08.252858817Z level=info msg="Executing migration" id="Create annotation_tag table v3"
policy-pap | [2024-02-19T09:49:35.116+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708336175114
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-02-19T09:49:08.253442242Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=584.005µs
grafana | logger=migrator t=2024-02-19T09:49:08.256856734Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
policy-pap | [2024-02-19T09:49:35.119+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-1, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Subscribed to topic(s): policy-pdp-pap
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.257677601Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.323233ms
grafana | logger=migrator t=2024-02-19T09:49:08.260740668Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
policy-pap | [2024-02-19T09:49:35.120+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.261013151Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=272.103µs
grafana | logger=migrator t=2024-02-19T09:49:08.264865275Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
policy-pap | allow.auto.create.topics = true
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.26539521Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=529.945µs
grafana | logger=migrator t=2024-02-19T09:49:08.268743901Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
policy-pap | auto.commit.interval.ms = 5000
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
grafana | logger=migrator t=2024-02-19T09:49:08.269019903Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=277.202µs
grafana | logger=migrator t=2024-02-19T09:49:08.272457314Z level=info msg="Executing migration" id="Add created time to annotation table"
policy-pap | auto.include.jmx.reporter = true
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.2776232Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.166496ms
grafana | logger=migrator t=2024-02-19T09:49:08.282220922Z level=info msg="Executing migration" id="Add updated time to annotation table"
policy-pap | auto.offset.reset = latest
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-02-19T09:49:08.286197587Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.976295ms
grafana | logger=migrator t=2024-02-19T09:49:08.290736838Z level=info msg="Executing migration" id="Add index for created in annotation table"
grafana | logger=migrator t=2024-02-19T09:49:08.291620576Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=883.268µs
policy-pap | bootstrap.servers = [kafka:9092]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.295369719Z level=info msg="Executing migration" id="Add index for updated in annotation table"
grafana | logger=migrator t=2024-02-19T09:49:08.297336867Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.978158ms
policy-pap | check.crcs = true
policy-db-migrator |
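The pdpgroup_pdpsubgroup table just created declares a six-column composite primary key, which is typical of these jpa-mapped tables: each key column becomes an @Id field on the Java side, with a separate key class carrying equality. A rough @IdClass sketch mirroring the columns above; the class names are invented, and the actual mappings live in the policy models code, not here:

    import java.io.Serializable;
    import java.util.Objects;
    import jakarta.persistence.Entity;
    import jakarta.persistence.Id;
    import jakarta.persistence.IdClass;

    @Entity
    @IdClass(PdpGroupSubgroupRef.Pk.class)
    public class PdpGroupSubgroupRef { // invented name, mirrors pdpgroup_pdpsubgroup
        @Id private String name;
        @Id private String version;
        @Id private String parentLocalName;
        @Id private String localName;
        @Id private String parentKeyVersion;
        @Id private String parentKeyName;

        // Key class: JPA requires Serializable plus equals/hashCode over all key fields.
        public static class Pk implements Serializable {
            public String name, version, parentLocalName, localName, parentKeyVersion, parentKeyName;

            @Override public boolean equals(Object o) {
                if (!(o instanceof Pk other)) return false;
                return Objects.equals(name, other.name) && Objects.equals(version, other.version)
                        && Objects.equals(parentLocalName, other.parentLocalName)
                        && Objects.equals(localName, other.localName)
                        && Objects.equals(parentKeyVersion, other.parentKeyVersion)
                        && Objects.equals(parentKeyName, other.parentKeyName);
            }

            @Override public int hashCode() {
                return Objects.hash(name, version, parentLocalName, localName, parentKeyVersion, parentKeyName);
            }
        }
    }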
id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2024-02-19T09:49:08.300862779Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=235.542µs policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | grafana | logger=migrator t=2024-02-19T09:49:08.303639704Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2024-02-19T09:49:08.307916623Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.276549ms policy-pap | client.id = consumer-policy-pap-2 policy-db-migrator | > upgrade 0450-pdpgroup.sql grafana | logger=migrator t=2024-02-19T09:49:08.312924548Z level=info msg="Executing migration" id="Add index for epoch_end" kafka | log.message.timestamp.type = CreateTime policy-pap | client.rack = policy-db-migrator | -------------- kafka | log.preallocate = false kafka | log.retention.bytes = -1 policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 policy-pap | default.api.timeout.ms = 60000 policy-db-migrator | -------------- kafka | log.retention.minutes = null kafka | log.retention.ms = null policy-pap | enable.auto.commit = true policy-db-migrator | kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 policy-pap | exclude.internal.topics = true policy-db-migrator | kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null policy-pap | fetch.max.bytes = 52428800 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 policy-pap | fetch.max.wait.ms = 500 policy-db-migrator | -------------- kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 policy-pap | fetch.min.bytes = 1 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = policy-pap | group.id = policy-pap policy-db-migrator | -------------- kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 policy-pap | group.instance.id = null policy-db-migrator | kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 policy-pap | heartbeat.interval.ms = 3000 policy-db-migrator | kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 policy-pap | interceptor.classes = [] policy-db-migrator | > upgrade 0470-pdp.sql kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 policy-pap | internal.leave.group.on.close = true policy-db-migrator | -------------- kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 policy-pap | 
internal.throw.on.fetch.stable.offset.unsupported = false policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] policy-pap | isolation.level = read_uncommitted policy-db-migrator | -------------- kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 policy-pap | max.partition.fetch.bytes = 1048576 policy-db-migrator | kafka | node.id = 1 kafka | num.io.threads = 8 policy-pap | max.poll.interval.ms = 300000 policy-db-migrator | > upgrade 0480-pdpstatistics.sql kafka | num.network.threads = 3 grafana | logger=migrator t=2024-02-19T09:49:08.313826396Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=901.498µs policy-pap | max.poll.records = 500 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-19T09:49:08.316794222Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" kafka | num.partitions = 1 policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) grafana | logger=migrator t=2024-02-19T09:49:08.316967623Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=167.381µs kafka | num.recovery.threads.per.data.dir = 1 policy-pap | metric.reporters = [] policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-19T09:49:08.319830919Z level=info msg="Executing migration" id="Move region to single row" kafka | num.replica.alter.log.dirs.threads = null policy-pap | metrics.num.samples = 2 policy-db-migrator | grafana | logger=migrator t=2024-02-19T09:49:08.320454265Z level=info msg="Migration successfully executed" id="Move region to single row" duration=623.256µs kafka | num.replica.fetchers = 1 policy-pap | metrics.recording.level = INFO policy-db-migrator | grafana | logger=migrator t=2024-02-19T09:49:08.324217319Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" kafka | offset.metadata.max.bytes = 4096 policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql grafana | logger=migrator t=2024-02-19T09:49:08.325587291Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.374042ms kafka | offsets.commit.required.acks = -1 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-db-migrator | -------------- 
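The grafana lines threaded through this output always arrive in pairs, "Executing migration" followed by "Migration successfully executed ... duration=", with the duration timed per step. Grafana's migrator itself is Go code; purely to illustrate that log-and-time pattern in the language used elsewhere on this page, a sketch in which the Migration interface is invented:

    import java.util.List;
    import java.util.logging.Logger;

    public class MigrationTimerSketch {
        interface Migration { // invented for illustration
            String id();
            void execute() throws Exception;
        }

        private static final Logger LOG = Logger.getLogger("migrator");

        static void runAll(List<Migration> steps) throws Exception {
            for (Migration step : steps) {
                LOG.info("Executing migration id=\"" + step.id() + "\"");
                long start = System.nanoTime();
                step.execute();
                long micros = (System.nanoTime() - start) / 1_000L;
                LOG.info("Migration successfully executed id=\"" + step.id()
                        + "\" duration=" + micros + "µs");
            }
        }
    }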
grafana | logger=migrator t=2024-02-19T09:49:08.333687634Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
kafka | offsets.commit.timeout.ms = 5000
policy-pap | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-02-19T09:49:08.334547821Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=860.317µs
kafka | offsets.load.buffer.size = 5242880
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-02-19T09:49:08.340130892Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
kafka | offsets.retention.check.interval.ms = 600000
policy-db-migrator | --------------
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-02-19T09:49:08.341546355Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.414553ms
kafka | offsets.retention.minutes = 10080
policy-db-migrator |
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-02-19T09:49:08.347039584Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
kafka | offsets.topic.compression.codec = 0
policy-db-migrator |
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-19T09:49:08.348416926Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.376862ms
kafka | offsets.topic.num.partitions = 50
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-02-19T09:49:08.35212119Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
kafka | offsets.topic.replication.factor = 1
policy-db-migrator | --------------
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-02-19T09:49:08.353404171Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.284001ms
kafka | offsets.topic.segment.bytes = 104857600
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-02-19T09:49:08.355953164Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
policy-db-migrator | --------------
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-02-19T09:49:08.357477837Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.515843ms
kafka | password.encoder.iterations = 4096
policy-db-migrator |
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-02-19T09:49:08.360278622Z level=info msg="Executing migration" id="Increase tags column to length 4096"
kafka | password.encoder.key.length = 128
policy-db-migrator |
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-02-19T09:49:08.360480515Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=206.023µs
kafka | password.encoder.keyfactory.algorithm = null
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-02-19T09:49:08.36322781Z level=info msg="Executing migration" id="create test_data table"
kafka | password.encoder.old.secret = null
policy-db-migrator | --------------
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-02-19T09:49:08.364097017Z level=info msg="Migration successfully executed" id="create test_data table" duration=868.757µs
kafka | password.encoder.secret = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
policy-pap | sasl.login.class = null
grafana | logger=migrator t=2024-02-19T09:49:08.367038664Z level=info msg="Executing migration" id="create dashboard_version table v1"
kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
policy-db-migrator | --------------
policy-pap | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-02-19T09:49:08.367890551Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=848.497µs
kafka | process.roles = []
policy-db-migrator |
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-02-19T09:49:08.413193628Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
kafka | producer.id.expiration.check.interval.ms = 600000
policy-db-migrator |
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-02-19T09:49:08.41560787Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=2.415982ms
kafka | producer.id.expiration.ms = 86400000
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-02-19T09:49:08.420552904Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
kafka | producer.purgatory.purge.interval.requests = 1000
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-02-19T09:49:08.421637204Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.199761ms
kafka | queued.max.request.bytes = -1
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-02-19T09:49:08.42901199Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
kafka | queued.max.requests = 500
policy-db-migrator | --------------
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-02-19T09:49:08.429381084Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=368.834µs
kafka | quota.window.num = 11
policy-db-migrator |
policy-pap | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-19T09:49:08.432775134Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
kafka | quota.window.size.seconds = 1
policy-db-migrator |
policy-pap | sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-02-19T09:49:08.43330376Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=528.316µs
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=migrator t=2024-02-19T09:49:08.436501138Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
kafka | remote.log.manager.task.interval.ms = 30000
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-02-19T09:49:08.43670158Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=199.992µs
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | remote.log.manager.task.retry.backoff.ms = 500
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.440236201Z level=info msg="Executing migration" id="create team table"
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | remote.log.manager.task.retry.jitter = 0.2
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.441723825Z level=info msg="Migration successfully executed" id="create team table" duration=1.495434ms
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | remote.log.manager.thread.pool.size = 10
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.447073862Z level=info msg="Executing migration" id="add index team.org_id"
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | remote.log.metadata.custom.metadata.max.bytes = 128
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
grafana | logger=migrator t=2024-02-19T09:49:08.448165833Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.091311ms
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.451990087Z level=info msg="Executing migration" id="add unique index team_org_id_name"
policy-pap | sasl.oauthbearer.scope.claim.name = scope
kafka | remote.log.metadata.manager.class.path = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
grafana | logger=migrator t=2024-02-19T09:49:08.452995357Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.00436ms
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.456815881Z level=info msg="Executing migration" id="Add column uid in team"
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | remote.log.metadata.manager.listener.name = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.461546273Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.729553ms
policy-pap | security.protocol = PLAINTEXT
kafka | remote.log.reader.max.pending.tasks = 100
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.468457255Z level=info msg="Executing migration" id="Update uid column values in team"
policy-pap | security.providers = null
kafka | remote.log.reader.threads = 10
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
grafana | logger=migrator t=2024-02-19T09:49:08.468745027Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=287.442µs
policy-pap | send.buffer.bytes = 131072
kafka | remote.log.storage.manager.class.name = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.474410488Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
policy-pap | session.timeout.ms = 45000
kafka | remote.log.storage.manager.class.path = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
grafana | logger=migrator t=2024-02-19T09:49:08.475492279Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.081581ms
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | remote.log.storage.manager.impl.prefix = rsm.config.
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.479526934Z level=info msg="Executing migration" id="create team member table"
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | remote.log.storage.system.enable = false
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.480654284Z level=info msg="Migration successfully executed" id="create team member table" duration=1.12252ms
policy-pap | ssl.cipher.suites = null
kafka | replica.fetch.backoff.ms = 1000
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.484942903Z level=info msg="Executing migration" id="add index team_member.org_id"
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | replica.fetch.max.bytes = 1048576
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
grafana | logger=migrator t=2024-02-19T09:49:08.48673734Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.794147ms
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | replica.fetch.min.bytes = 1
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.49019792Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
policy-pap | ssl.engine.factory.class = null
kafka | replica.fetch.response.max.bytes = 10485760
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-02-19T09:49:08.491096548Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=898.228µs
policy-pap | ssl.key.password = null
kafka | replica.fetch.wait.max.ms = 500
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.494700851Z level=info msg="Executing migration" id="add index team_member.team_id"
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.49564720Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=946.119µs
policy-pap | ssl.keystore.certificate.chain = null
kafka | replica.lag.time.max.ms = 30000
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.499499284Z level=info msg="Executing migration" id="Add column email to team table"
policy-pap | ssl.keystore.key = null
kafka | replica.selector.class = null
policy-db-migrator | > upgrade 0570-toscadatatype.sql
grafana | logger=migrator t=2024-02-19T09:49:08.504258727Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.759303ms
policy-pap | ssl.keystore.location = null
kafka | replica.socket.receive.buffer.bytes = 65536
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.507460756Z level=info msg="Executing migration" id="Add column external to team_member table"
policy-pap | ssl.keystore.password = null
kafka | replica.socket.timeout.ms = 30000
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
grafana | logger=migrator t=2024-02-19T09:49:08.512023667Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.562411ms
policy-pap | ssl.keystore.type = JKS
kafka | replication.quota.window.num = 11
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.515513858Z level=info msg="Executing migration" id="Add column permission to team_member table"
policy-pap | ssl.protocol = TLSv1.3
kafka | replication.quota.window.size.seconds = 1
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.520263401Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.745193ms
policy-pap | ssl.provider = null
kafka | request.timeout.ms = 30000
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.525544308Z level=info msg="Executing migration" id="create dashboard acl table"
policy-pap | ssl.secure.random.implementation = null
kafka | reserved.broker.max.id = 1000
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
grafana | logger=migrator t=2024-02-19T09:49:08.526467667Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=926.529µs
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | sasl.client.callback.handler.class = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.529982468Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
policy-pap | ssl.truststore.certificates = null
kafka | sasl.enabled.mechanisms = [GSSAPI]
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
grafana | logger=migrator t=2024-02-19T09:49:08.531370321Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.387183ms
policy-pap | ssl.truststore.location = null
kafka | sasl.jaas.config = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.534825172Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
policy-pap | ssl.truststore.password = null
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.536447286Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.616275ms
policy-pap | ssl.truststore.type = JKS
kafka | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.540978137Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
grafana | logger=migrator t=2024-02-19T09:49:08.542145767Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.16695ms
policy-pap |
kafka | sasl.kerberos.service.name = null
policy-db-migrator | --------------
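The db-migrator output above repeats one fixed pattern per script: an "> upgrade NNNN-*.sql" marker, a "--------------" divider, then an idempotent CREATE TABLE IF NOT EXISTS keyed on a composite (name, version) primary key. A minimal JDBC sketch of that replay loop follows; the JDBC URL, credentials, and the single inlined statement are illustrative assumptions, not values taken from this job.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.List;

// Sketch of a db-migrator-style replay: each script runs verbatim, and
// CREATE TABLE IF NOT EXISTS makes a repeated run a no-op.
public final class MigratorSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mariadb://localhost:3306/policyadmin"; // assumed, not from this log
        List<String> scripts = List.of(
            // 0570-toscadatatype.sql, exactly as logged above
            "CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, "
                + "derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, "
                + "name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, "
                + "PRIMARY KEY PK_TOSCADATATYPE (name, version))");
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
             Statement stmt = conn.createStatement()) {
            for (String sql : scripts) {
                stmt.execute(sql); // safe to re-run: existing tables are skipped
            }
        }
    }
}
```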
msg="Executing migration" id="add index dashboard_acl_user_id" policy-pap | [2024-02-19T09:49:35.127+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-02-19T09:49:08.54806997Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.16957ms policy-pap | [2024-02-19T09:49:35.127+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-19T09:49:08.552225178Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" policy-pap | [2024-02-19T09:49:35.127+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708336175127 kafka | sasl.login.callback.handler.class = null policy-db-migrator | grafana | logger=migrator t=2024-02-19T09:49:08.552905434Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=686.096µs policy-pap | [2024-02-19T09:49:35.127+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap kafka | sasl.login.class = null policy-db-migrator | grafana | logger=migrator t=2024-02-19T09:49:08.55584483Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" kafka | sasl.login.connect.timeout.ms = null policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-pap | [2024-02-19T09:49:35.499+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json grafana | logger=migrator t=2024-02-19T09:49:08.556740069Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=894.969µs kafka | sasl.login.read.timeout.ms = null policy-db-migrator | -------------- policy-pap | [2024-02-19T09:49:35.645+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning grafana | logger=migrator t=2024-02-19T09:49:08.56135922Z level=info msg="Executing migration" id="add index dashboard_permission" kafka | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) policy-pap | [2024-02-19T09:49:35.926+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@38197e82, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@5516ee5, org.springframework.security.web.context.SecurityContextHolderFilter@6c1a63f7, org.springframework.security.web.header.HeaderWriterFilter@26a202ae, org.springframework.security.web.authentication.logout.LogoutFilter@284b487f, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@2b4954a4, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@72240290, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@34e9de8d, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6765b6a2, org.springframework.security.web.access.ExceptionTranslationFilter@1f1ffc18, org.springframework.security.web.access.intercept.AuthorizationFilter@6ee186f3] grafana | logger=migrator t=2024-02-19T09:49:08.562774663Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.414103ms kafka | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | -------------- policy-pap | [2024-02-19T09:49:36.883+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' grafana | logger=migrator t=2024-02-19T09:49:08.57036334Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" kafka | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | policy-pap | [2024-02-19T09:49:36.994+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] grafana | logger=migrator t=2024-02-19T09:49:08.571122748Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=759.268µs kafka | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | policy-pap | [2024-02-19T09:49:37.018+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' grafana | logger=migrator t=2024-02-19T09:49:08.575401726Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" kafka | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-pap | [2024-02-19T09:49:37.036+00:00|INFO|ServiceManager|main] Policy PAP starting grafana | logger=migrator t=2024-02-19T09:49:08.575740109Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=338.643µs kafka | sasl.login.retry.backoff.ms = 100 policy-db-migrator | -------------- policy-pap | [2024-02-19T09:49:37.036+00:00|INFO|ServiceManager|main] Policy PAP starting 
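The JpaBaseConfiguration warning above is Spring Boot's stock message: spring.jpa.open-in-view defaults to true, so JPA sessions stay open through view rendering. A sketch of setting the property explicitly at bootstrap is below; the main class name is a placeholder and this is illustrative only, not the project's actual configuration.

```java
import java.util.Map;
import org.springframework.boot.SpringApplication;

// Illustrative only: set spring.jpa.open-in-view explicitly before the
// context starts, which silences the warning logged above.
public final class PapLauncherSketch {
    public static void main(String[] args) {
        // PapApplicationPlaceholder stands in for the real @SpringBootApplication class.
        SpringApplication app = new SpringApplication(PapApplicationPlaceholder.class);
        app.setDefaultProperties(Map.of("spring.jpa.open-in-view", "false"));
        app.run(args);
    }
}
```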
policy-pap | [2024-02-19T09:49:37.036+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
grafana | logger=migrator t=2024-02-19T09:49:08.580549652Z level=info msg="Executing migration" id="create tag table"
kafka | sasl.mechanism.controller.protocol = GSSAPI
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
policy-pap | [2024-02-19T09:49:37.037+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
grafana | logger=migrator t=2024-02-19T09:49:08.58145846Z level=info msg="Migration successfully executed" id="create tag table" duration=906.758µs
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
policy-db-migrator | --------------
policy-pap | [2024-02-19T09:49:37.037+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
grafana | logger=migrator t=2024-02-19T09:49:08.584100954Z level=info msg="Executing migration" id="add index tag.key_value"
kafka | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator |
policy-pap | [2024-02-19T09:49:37.037+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
grafana | logger=migrator t=2024-02-19T09:49:08.584995273Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=890.589µs
kafka | sasl.oauthbearer.expected.audience = null
policy-db-migrator |
policy-pap | [2024-02-19T09:49:37.038+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
grafana | logger=migrator t=2024-02-19T09:49:08.588695756Z level=info msg="Executing migration" id="create login attempt table"
kafka | sasl.oauthbearer.expected.issuer = null
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
policy-pap | [2024-02-19T09:49:37.038+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
grafana | logger=migrator t=2024-02-19T09:49:08.589732245Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.040038ms
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | --------------
policy-pap | [2024-02-19T09:49:37.042+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5cceb518-7b72-41da-b42c-3c8775105be3, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@748aa7dc
grafana | logger=migrator t=2024-02-19T09:49:08.593016915Z level=info msg="Executing migration" id="add index login_attempt.username"
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | [2024-02-19T09:49:37.054+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5cceb518-7b72-41da-b42c-3c8775105be3, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.593746651Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=729.626µs
policy-pap | [2024-02-19T09:49:37.054+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.597972969Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
policy-pap | allow.auto.create.topics = true
kafka | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.598858017Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=884.858µs
policy-pap | auto.commit.interval.ms = 5000
kafka | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator | > upgrade 0630-toscanodetype.sql
grafana | logger=migrator t=2024-02-19T09:49:08.603279546Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
policy-pap | auto.include.jmx.reporter = true
kafka | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.622046655Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=18.766109ms
policy-pap | auto.offset.reset = latest
kafka | sasl.server.callback.handler.class = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
grafana | logger=migrator t=2024-02-19T09:49:08.626818678Z level=info msg="Executing migration" id="create login_attempt v2"
policy-pap | bootstrap.servers = [kafka:9092]
kafka | sasl.server.max.receive.size = 524288
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.627301423Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=481.945µs
policy-pap | check.crcs = true
kafka | security.inter.broker.protocol = PLAINTEXT
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.632124275Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | security.providers = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.632769372Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=644.617µs
policy-pap | client.id = consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3
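The ConsumerConfig dump that begins here (and continues below, interleaved with the other containers) corresponds to a plain Kafka consumer on the policy-pdp-pap topic. A minimal equivalent is sketched below; bootstrap.servers, group.id, auto.offset.reset, the String deserializers, and the topic are taken from the logged values, everything else is left at client defaults.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Sketch of a consumer matching the ConsumerConfig values logged above.
public final class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");                    // from the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "5cceb518-7b72-41da-b42c-3c8775105be3");   // from the log
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");                        // from the log
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            // fetchTimeout=15000 in the topic source above; PAP's dispatcher would
            // route each message by type, here we just print it.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(15000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}
```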
kafka | server.max.startup.time.ms = 9223372036854775807
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
grafana | logger=migrator t=2024-02-19T09:49:08.635640578Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
policy-pap | client.rack =
kafka | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.635842679Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=203.821µs
policy-pap | connections.max.idle.ms = 540000
kafka | socket.connection.setup.timeout.ms = 10000
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
grafana | logger=migrator t=2024-02-19T09:49:08.637831638Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
policy-pap | default.api.timeout.ms = 60000
kafka | socket.listen.backlog.size = 50
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.638246471Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=414.973µs
policy-pap | enable.auto.commit = true
kafka | socket.receive.buffer.bytes = 102400
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.641903514Z level=info msg="Executing migration" id="create user auth table"
policy-pap | exclude.internal.topics = true
kafka | socket.request.max.bytes = 104857600
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.642457938Z level=info msg="Migration successfully executed" id="create user auth table" duration=554.014µs
policy-pap | fetch.max.bytes = 52428800
kafka | socket.send.buffer.bytes = 102400
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
grafana | logger=migrator t=2024-02-19T09:49:08.645771778Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
policy-pap | fetch.max.wait.ms = 500
kafka | ssl.cipher.suites = []
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.646464645Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=695.807µs
policy-pap | fetch.min.bytes = 1
kafka | ssl.client.auth = none
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-02-19T09:49:08.649492052Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
policy-pap | group.id = 5cceb518-7b72-41da-b42c-3c8775105be3
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.649545623Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=53.951µs
policy-pap | group.instance.id = null
kafka | ssl.endpoint.identification.algorithm = https
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.65375843Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
policy-pap | heartbeat.interval.ms = 3000
kafka | ssl.engine.factory.class = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.659527982Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.769032ms
policy-pap | interceptor.classes = []
kafka | ssl.key.password = null
policy-db-migrator | > upgrade 0660-toscaparameter.sql
grafana | logger=migrator t=2024-02-19T09:49:08.664064373Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
policy-pap | internal.leave.group.on.close = true
kafka | ssl.keymanager.algorithm = SunX509
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.669545992Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.485669ms
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
kafka | ssl.keystore.certificate.chain = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-02-19T09:49:08.673963811Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
policy-pap | isolation.level = read_uncommitted
kafka | ssl.keystore.key = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.678941966Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.977755ms
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | ssl.keystore.location = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.683170125Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
policy-pap | max.partition.fetch.bytes = 1048576
kafka | ssl.keystore.password = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.688337361Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.170966ms
policy-pap | max.poll.interval.ms = 300000
kafka | ssl.keystore.type = JKS
policy-db-migrator | > upgrade 0670-toscapolicies.sql
grafana | logger=migrator t=2024-02-19T09:49:08.691047676Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
policy-pap | max.poll.records = 500
kafka | ssl.principal.mapping.rules = DEFAULT
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.691986064Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=938.228µs
policy-pap | metadata.max.age.ms = 300000
kafka | ssl.protocol = TLSv1.3
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
grafana | logger=migrator t=2024-02-19T09:49:08.696465374Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
policy-pap | metric.reporters = []
kafka | ssl.provider = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.701389788Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=4.923984ms
policy-pap | metrics.num.samples = 2
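The grafana migrator's login_attempt sequence above (rename to login_attempt_tmp_qwerty, create v2, copy v1 to v2, drop the tmp table) is the classic copy-and-swap rebuild used when a column cannot be altered in place. A generic JDBC sketch of the same pattern follows; the connection details and the v2 column list are illustrative assumptions, not grafana's actual schema.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Illustrative copy-and-swap rebuild, mirroring the login_attempt v1 -> v2
// migration steps logged above. Schema details are placeholders.
public final class TableRebuildSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mariadb://localhost:3306/grafana"; // assumed
        try (Connection conn = DriverManager.getConnection(url, "user", "pass");
             Statement stmt = conn.createStatement()) {
            // 1. Move the old table out of the way.
            stmt.execute("ALTER TABLE login_attempt RENAME TO login_attempt_tmp_qwerty");
            // 2. Create the new shape (columns assumed for illustration).
            stmt.execute("CREATE TABLE login_attempt (id BIGINT AUTO_INCREMENT PRIMARY KEY, "
                    + "username VARCHAR(190) NOT NULL, created BIGINT NOT NULL)");
            // 3. Copy the surviving data across.
            stmt.execute("INSERT INTO login_attempt (id, username, created) "
                    + "SELECT id, username, created FROM login_attempt_tmp_qwerty");
            // 4. Drop the temporary table.
            stmt.execute("DROP TABLE login_attempt_tmp_qwerty");
        }
    }
}
```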
kafka | ssl.secure.random.implementation = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.704306854Z level=info msg="Executing migration" id="create server_lock table"
policy-pap | metrics.recording.level = INFO
kafka | ssl.trustmanager.algorithm = PKIX
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.705028751Z level=info msg="Migration successfully executed" id="create server_lock table" duration=722.267µs
policy-pap | metrics.sample.window.ms = 30000
kafka | ssl.truststore.certificates = null
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
grafana | logger=migrator t=2024-02-19T09:49:08.708586693Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
kafka | ssl.truststore.location = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.709481101Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=894.528µs
policy-pap | receive.buffer.bytes = 65536
kafka | ssl.truststore.password = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-02-19T09:49:08.714022242Z level=info msg="Executing migration" id="create user auth token table"
policy-pap | reconnect.backoff.max.ms = 1000
kafka | ssl.truststore.type = JKS
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.714783829Z level=info msg="Migration successfully executed" id="create user auth token table" duration=764.857µs
policy-pap | reconnect.backoff.ms = 50
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.71937894Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
policy-pap | request.timeout.ms = 30000
kafka | transaction.max.timeout.ms = 900000
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.720330849Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=951.649µs
policy-pap | retry.backoff.ms = 100
kafka | transaction.partition.verification.enable = true
policy-db-migrator | > upgrade 0690-toscapolicy.sql
grafana | logger=migrator t=2024-02-19T09:49:08.726289282Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
policy-pap | sasl.client.callback.handler.class = null
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.727722725Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.432663ms
policy-pap | sasl.jaas.config = null
kafka | transaction.state.log.load.buffer.size = 5242880
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
grafana | logger=migrator t=2024-02-19T09:49:08.735649876Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | transaction.state.log.min.isr = 2
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.736675645Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.014819ms
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
kafka | transaction.state.log.num.partitions = 50
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.743395106Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
policy-pap | sasl.kerberos.service.name = null
kafka | transaction.state.log.replication.factor = 3
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.75160558Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.211013ms
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | transaction.state.log.segment.bytes = 104857600
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
grafana | logger=migrator t=2024-02-19T09:49:08.754528056Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | transactional.id.expiration.ms = 604800000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.755467605Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=938.779µs
policy-pap | sasl.login.callback.handler.class = null
kafka | unclean.leader.election.enable = false
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
grafana | logger=migrator t=2024-02-19T09:49:08.761311427Z level=info msg="Executing migration" id="create cache_data table"
policy-pap | sasl.login.class = null
kafka | unstable.api.versions.enable = false
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.762078054Z level=info msg="Migration successfully executed" id="create cache_data table" duration=766.507µs
policy-pap | sasl.login.connect.timeout.ms = null
kafka | zookeeper.clientCnxnSocket = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.765087901Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
policy-pap | sasl.login.read.timeout.ms = null
kafka | zookeeper.connect = zookeeper:2181
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.766655465Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.573564ms
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | zookeeper.connection.timeout.ms = null
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
grafana | logger=migrator t=2024-02-19T09:49:08.769743813Z level=info msg="Executing migration" id="create short_url table v1"
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | zookeeper.max.in.flight.requests = 10
policy-db-migrator | --------------
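Each container join table created in these scripts (toscapolicies_toscapolicy, toscadatatypes_toscadatatype, and so on) maps a container keyed by conceptContainerName/conceptContainerVersion to member concepts keyed by name/version, under a four-column composite primary key; note that the concpetContainerMapVersion spelling above is how the column actually ships. A hedged JDBC sketch of resolving members through such a table follows; the connection details and the sample container key are assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Sketch: list the member policies of one toscapolicies container,
// using the table and column names from the DDL logged above.
public final class ContainerLookupSketch {
    public static void main(String[] args) throws Exception {
        String sql = "SELECT m.name, m.version FROM toscapolicies_toscapolicy m "
                + "WHERE m.conceptContainerName = ? AND m.conceptContainerVersion = ?";
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_user");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, "ToscaPolicies"); // sample key, assumed for illustration
            ps.setString(2, "1.0.0");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name") + " " + rs.getString("version"));
                }
            }
        }
    }
}
```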
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | zookeeper.metadata.migration.enable = false
grafana | logger=migrator t=2024-02-19T09:49:08.77057099Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=827.497µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | zookeeper.session.timeout.ms = 18000
grafana | logger=migrator t=2024-02-19T09:49:08.778280919Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
policy-db-migrator | --------------
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | zookeeper.set.acl = false
grafana | logger=migrator t=2024-02-19T09:49:08.780308247Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=2.030858ms
policy-db-migrator |
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | zookeeper.ssl.cipher.suites = null
grafana | logger=migrator t=2024-02-19T09:49:08.815212971Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
policy-db-migrator |
policy-pap | sasl.mechanism = GSSAPI
kafka | zookeeper.ssl.client.enable = false
grafana | logger=migrator t=2024-02-19T09:49:08.815332382Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=120.831µs
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | zookeeper.ssl.crl.enable = false
grafana | logger=migrator t=2024-02-19T09:49:08.820773201Z level=info msg="Executing migration" id="delete alert_definition table"
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | zookeeper.ssl.enabled.protocols = null
grafana | logger=migrator t=2024-02-19T09:49:08.821095864Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=322.903µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
grafana | logger=migrator t=2024-02-19T09:49:08.825633065Z level=info msg="Executing migration" id="recreate alert_definition table"
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | zookeeper.ssl.keystore.location = null
grafana | logger=migrator t=2024-02-19T09:49:08.827128568Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.494513ms
policy-db-migrator |
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | zookeeper.ssl.keystore.password = null
grafana | logger=migrator t=2024-02-19T09:49:08.831058144Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
policy-db-migrator |
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | zookeeper.ssl.keystore.type = null
grafana | logger=migrator t=2024-02-19T09:49:08.832235194Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.17721ms
policy-db-migrator | > upgrade 0730-toscaproperty.sql
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-02-19T09:49:08.838682942Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
policy-db-migrator | --------------
kafka | zookeeper.ssl.ocsp.enable = false
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-02-19T09:49:08.840036245Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.355163ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | zookeeper.ssl.protocol = TLSv1.2
policy-pap | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-02-19T09:49:08.844563095Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
kafka | zookeeper.ssl.truststore.location = null
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.844672036Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=109.751µs
kafka | zookeeper.ssl.truststore.password = null
policy-pap | security.protocol = PLAINTEXT
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.849299677Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
kafka | zookeeper.ssl.truststore.type = null
policy-pap | security.providers = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.850356466Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.056929ms
kafka | (kafka.server.KafkaConfig)
policy-pap | send.buffer.bytes = 131072
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
grafana | logger=migrator t=2024-02-19T09:49:08.853564316Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
kafka | [2024-02-19 09:49:09,847] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-pap | session.timeout.ms = 45000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.854547805Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=983.339µs
kafka | [2024-02-19 09:49:09,848] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
grafana | logger=migrator t=2024-02-19T09:49:08.85849550Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
kafka | [2024-02-19 09:49:09,849] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.859454639Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=958.919µs
kafka | [2024-02-19 09:49:09,851] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.862361575Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
policy-pap | ssl.cipher.suites = null
kafka | [2024-02-19 09:49:09,888] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.863346694Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=984.599µs
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-02-19 09:49:09,895] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
grafana | logger=migrator t=2024-02-19T09:49:08.867029376Z level=info msg="Executing migration" id="Add column paused in alert_definition"
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-02-19 09:49:09,904] INFO Loaded 0 logs in 15ms (kafka.log.LogManager)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.872875499Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.845893ms
policy-pap | ssl.engine.factory.class = null
kafka | [2024-02-19 09:49:09,906] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
grafana | logger=migrator t=2024-02-19T09:49:08.876169399Z level=info msg="Executing migration" id="drop alert_definition table"
policy-pap | ssl.key.password = null
kafka | [2024-02-19 09:49:09,908] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.877122678Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=956.419µs
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-02-19 09:49:09,919] INFO Starting the log cleaner (kafka.log.LogCleaner)
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.880360457Z level=info msg="Executing migration" id="delete alert_definition_version table"
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-02-19 09:49:09,963] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.880491888Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=131.141µs
policy-pap | ssl.keystore.key = null
kafka | [2024-02-19 09:49:10,002] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
grafana | logger=migrator t=2024-02-19T09:49:08.887567882Z level=info msg="Executing migration" id="recreate alert_definition_version table"
policy-pap | ssl.keystore.location = null
kafka | [2024-02-19 09:49:10,016] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.888458399Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=880.027µs
policy-pap | ssl.keystore.password = null
kafka | [2024-02-19 09:49:10,044] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-02-19T09:49:08.891973561Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
policy-pap | ssl.keystore.type = JKS
kafka | [2024-02-19 09:49:10,412] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.893760287Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.786726ms
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-02-19 09:49:10,433] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.89747450Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
policy-pap | ssl.provider = null
kafka | [2024-02-19 09:49:10,434] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.89849380Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.01876ms
policy-pap | ssl.secure.random.implementation = null
kafka | [2024-02-19 09:49:10,440] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
policy-db-migrator | > upgrade 0770-toscarequirement.sql
grafana | logger=migrator t=2024-02-19T09:49:08.903706446Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-02-19 09:49:10,445] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.903769177Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=63.211µs
kafka | [2024-02-19 09:49:10,470] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
grafana | logger=migrator t=2024-02-19T09:49:08.909082844Z level=info msg="Executing migration" id="drop alert_definition_version table"
policy-pap | ssl.truststore.certificates = null
kafka | [2024-02-19 09:49:10,475] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.91081690Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.728096ms
policy-pap | ssl.truststore.location = null
kafka | [2024-02-19 09:49:10,477] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-02-19T09:49:08.916980916Z level=info msg="Executing migration" id="create alert_instance table"
policy-pap | ssl.truststore.password = null
kafka | [2024-02-19 09:49:10,479] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-db-migrator |
kafka | [2024-02-19 09:49:10,480] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-db-migrator |
kafka | [2024-02-19 09:49:10,494] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
grafana | logger=migrator t=2024-02-19T09:49:08.918084826Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.10432ms
grafana | logger=migrator t=2024-02-19T09:49:08.923837098Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
policy-pap | ssl.truststore.type = JKS
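Interleaved with PAP's consumer config dump, the kafka broker is coming up: log manager, data-plane acceptors for the PLAINTEXT and PLAINTEXT_HOST listeners, and the purgatory expiration reapers. Once these lines appear, a quick reachability check is an AdminClient describeCluster call, sketched below; the bootstrap address is the in-network listener from the log, and everything else is a plain use of the standard Kafka admin API.

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;

// Sketch: confirm the freshly started broker is reachable and see which
// node currently holds the controller role.
public final class BrokerCheckSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // listener from the log
        try (AdminClient admin = AdminClient.create(props)) {
            DescribeClusterResult cluster = admin.describeCluster();
            System.out.println("nodes: " + cluster.nodes().get());
            System.out.println("controller: " + cluster.controller().get());
        }
    }
}
```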
kafka | [2024-02-19 09:49:10,495] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
policy-db-migrator | > upgrade 0780-toscarequirements.sql
grafana | logger=migrator t=2024-02-19T09:49:08.924844757Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.007698ms
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-02-19 09:49:10,522] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.927703832Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
policy-pap |
kafka | [2024-02-19 09:49:10,571] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1708336150557,1708336150557,1,0,0,72057609922936833,258,0,27
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
grafana | logger=migrator t=2024-02-19T09:49:08.92867120Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=966.908µs
policy-pap | [2024-02-19T09:49:37.060+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | (kafka.zk.KafkaZkClient)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:08.931452345Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
policy-pap | [2024-02-19T09:49:37.061+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | [2024-02-19 09:49:10,573] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:08.937242737Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.789952ms
policy-pap | [2024-02-19T09:49:37.061+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708336177060
kafka | [2024-02-19 09:49:10,627] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
policy-db-migrator |
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
kafka | [2024-02-19 09:49:10,635] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-02-19T09:49:08.942532275Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | [2024-02-19 09:49:10,643] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-02-19T09:49:08.943416734Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=883.979µs
policy-db-migrator | --------------
policy-db-migrator |
kafka | [2024-02-19 09:49:10,645] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-02-19T09:49:08.946103288Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
policy-db-migrator |
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
kafka | [2024-02-19 09:49:10,656] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
grafana | logger=migrator t=2024-02-19T09:49:08.946995066Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=891.628µs
policy-db-migrator | --------------
policy-pap | [2024-02-19T09:49:37.061+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Subscribed to topic(s): policy-pdp-pap
kafka | [2024-02-19 09:49:10,661] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-02-19T09:49:08.949835761Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:10,667] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-19T09:49:08.985487792Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=35.655941ms
policy-db-migrator |
policy-db-migrator |
(kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-02-19T09:49:08.989970171Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql policy-db-migrator | -------------- kafka | [2024-02-19 09:49:10,672] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) grafana | logger=migrator t=2024-02-19T09:49:09.023309951Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=33.33979ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- kafka | [2024-02-19 09:49:10,677] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) grafana | logger=migrator t=2024-02-19T09:49:09.025636422Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" policy-db-migrator | policy-db-migrator | kafka | [2024-02-19 09:49:10,689] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) grafana | logger=migrator t=2024-02-19T09:49:09.026419599Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=774.397µs policy-pap | [2024-02-19T09:49:37.061+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher kafka | [2024-02-19 09:49:10,693] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) grafana | logger=migrator t=2024-02-19T09:49:09.030370565Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | -------------- kafka | [2024-02-19 09:49:10,694] INFO [TransactionCoordinator id=1] Startup complete. 
(kafka.coordinator.transaction.TransactionCoordinator) grafana | logger=migrator t=2024-02-19T09:49:09.031574135Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.2031ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- kafka | [2024-02-19 09:49:10,706] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) grafana | logger=migrator t=2024-02-19T09:49:09.03651579Z level=info msg="Executing migration" id="add current_reason column related to current_state" policy-db-migrator | policy-db-migrator | kafka | [2024-02-19 09:49:10,708] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache) grafana | logger=migrator t=2024-02-19T09:49:09.044633813Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.118673ms policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | -------------- kafka | [2024-02-19 09:49:10,710] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) grafana | logger=migrator t=2024-02-19T09:49:09.049443116Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) policy-db-migrator | -------------- kafka | [2024-02-19 09:49:10,713] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) grafana | logger=migrator t=2024-02-19T09:49:09.059748219Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=10.307773ms policy-db-migrator | policy-db-migrator | kafka | [2024-02-19 09:49:10,715] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) grafana | logger=migrator t=2024-02-19T09:49:09.062808856Z level=info msg="Executing migration" id="create alert_rule table" policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | -------------- kafka | [2024-02-19 09:49:10,730] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) grafana | logger=migrator t=2024-02-19T09:49:09.063483042Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=674.136µs policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) policy-pap | [2024-02-19T09:49:37.061+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, 
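[Editor's note: the policy-db-migrator entries above apply idempotent DDL (CREATE TABLE IF NOT EXISTS / CREATE INDEX) for the TOSCA schema. A minimal sketch, assuming a MariaDB JDBC driver on the classpath, of how one migrated table could be inspected afterwards; the JDBC URL, database name and credentials below are illustrative assumptions and are not taken from this log:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ToscaTableProbe {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details for the CSIT database container.
            String url = "jdbc:mariadb://mariadb:3306/policyadmin";
            try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
                 Statement stmt = conn.createStatement();
                 // name and version form the composite primary key PK_TOSCASERVICETEMPLATE above.
                 ResultSet rs = stmt.executeQuery("SELECT name, version FROM toscaservicetemplate")) {
                while (rs.next()) {
                    System.out.println(rs.getString("name") + " " + rs.getString("version"));
                }
            }
        }
    }
]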
kafka | [2024-02-19 09:49:10,738] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-19T09:49:09.070640217Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
policy-pap | [2024-02-19T09:49:37.061+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=debee999-74ee-44fc-bcec-6edc4c9db18d, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2024-02-19T09:49:37.062+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | [2024-02-19 09:49:10,745] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-19T09:49:09.072417193Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.775766ms
policy-pap | 	allow.auto.create.topics = true
policy-pap | 	auto.commit.interval.ms = 5000
kafka | [2024-02-19 09:49:10,754] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
grafana | logger=migrator t=2024-02-19T09:49:09.075585751Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	auto.offset.reset = latest
kafka | [2024-02-19 09:49:10,765] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
grafana | logger=migrator t=2024-02-19T09:49:09.077262426Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.676365ms
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	check.crcs = true
kafka | [2024-02-19 09:49:10,765] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
grafana | logger=migrator t=2024-02-19T09:49:09.080463105Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = consumer-policy-pap-4
grafana | logger=migrator t=2024-02-19T09:49:09.082347332Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.883066ms
kafka | [2024-02-19 09:49:10,767] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
policy-pap | 	client.rack = 
policy-pap | 	connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-02-19T09:49:09.089352295Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
kafka | [2024-02-19 09:49:10,767] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
policy-pap | 	default.api.timeout.ms = 60000
policy-pap | 	enable.auto.commit = true
grafana | logger=migrator t=2024-02-19T09:49:09.089553347Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=206.402µs
kafka | [2024-02-19 09:49:10,767] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
policy-pap | 	exclude.internal.topics = true
policy-pap | 	fetch.max.bytes = 52428800
grafana | logger=migrator t=2024-02-19T09:49:09.094860605Z level=info msg="Executing migration" id="add column for to alert_rule"
kafka | [2024-02-19 09:49:10,768] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
policy-pap | 	fetch.max.wait.ms = 500
policy-pap | 	fetch.min.bytes = 1
grafana | logger=migrator t=2024-02-19T09:49:09.101527324Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.66712ms
kafka | [2024-02-19 09:49:10,771] INFO [Controller id=1] List of topics to be deleted:  (kafka.controller.KafkaController)
policy-pap | 	group.id = policy-pap
policy-pap | 	group.instance.id = null
grafana | logger=migrator t=2024-02-19T09:49:09.104769643Z level=info msg="Executing migration" id="add column annotations to alert_rule"
kafka | [2024-02-19 09:49:10,771] INFO [Controller id=1] List of topics ineligible for deletion:  (kafka.controller.KafkaController)
policy-pap | 	heartbeat.interval.ms = 3000
policy-pap | 	interceptor.classes = []
grafana | logger=migrator t=2024-02-19T09:49:09.112354861Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=7.586638ms
kafka | [2024-02-19 09:49:10,771] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
policy-pap | 	internal.leave.group.on.close = true
policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-02-19T09:49:09.120121802Z level=info msg="Executing migration" id="add column labels to alert_rule"
kafka | [2024-02-19 09:49:10,772] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
policy-pap | 	isolation.level = read_uncommitted
policy-pap | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-02-19T09:49:09.126479888Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.357786ms
kafka | [2024-02-19 09:49:10,773] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
policy-pap | 	max.partition.fetch.bytes = 1048576
policy-pap | 	max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-02-19T09:49:09.129503426Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
kafka | [2024-02-19 09:49:10,775] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
policy-pap | 	max.poll.records = 500
policy-pap | 	metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-02-19T09:49:09.130429054Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=925.528µs
kafka | [2024-02-19 09:49:10,783] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
grafana | logger=migrator t=2024-02-19T09:49:09.133325569Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
kafka | [2024-02-19 09:49:10,784] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-02-19T09:49:09.134338139Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.01214ms
kafka | [2024-02-19 09:49:10,784] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | 	receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-02-19T09:49:09.138611367Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
kafka | [2024-02-19 09:49:10,788] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-02-19T09:49:09.14451513Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=5.903073ms
kafka | [2024-02-19 09:49:10,792] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-19T09:49:09.152173679Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
kafka | [2024-02-19 09:49:10,794] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
grafana | logger=migrator t=2024-02-19T09:49:09.159079662Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.909173ms
kafka | [2024-02-19 09:49:10,794] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-02-19T09:49:09.163018567Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
kafka | [2024-02-19 09:49:10,796] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-02-19T09:49:09.163830774Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=812.117µs
kafka | [2024-02-19 09:49:10,801] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-02-19T09:49:09.176030614Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
kafka | [2024-02-19 09:49:10,801] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-02-19T09:49:09.185792191Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.775367ms
kafka | [2024-02-19 09:49:10,803] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-02-19T09:49:09.188921079Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
kafka | [2024-02-19 09:49:10,805] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-02-19T09:49:09.193173248Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.252309ms
kafka | [2024-02-19 09:49:10,811] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.9:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-02-19T09:49:09.196219255Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
kafka | [2024-02-19 09:49:10,813] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-02-19T09:49:09.196372766Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=152.141µs
kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed.
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
kafka | 	at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
grafana | logger=migrator t=2024-02-19T09:49:09.200131Z level=info msg="Executing migration" id="create alert_rule_version table"
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | 	at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298)
grafana | logger=migrator t=2024-02-19T09:49:09.20121955Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.08773ms
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | 	at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251)
grafana | logger=migrator t=2024-02-19T09:49:09.205000864Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator | --------------
kafka | 	at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)
grafana | logger=migrator t=2024-02-19T09:49:09.206313985Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.315731ms
policy-db-migrator | 
policy-db-migrator | 
kafka | [2024-02-19 09:49:10,816] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient)
grafana | logger=migrator t=2024-02-19T09:49:09.209778817Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:10,816] INFO [Controller id=1] Partitions undergoing preferred replica election:  (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-19T09:49:09.211170679Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.391342ms
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:10,817] INFO [Controller id=1] Partitions that completed preferred replica election:  (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-19T09:49:09.214287377Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
policy-db-migrator | 
policy-db-migrator | 
kafka | [2024-02-19 09:49:10,817] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion:  (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-19T09:49:09.214407648Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=120.131µs
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:10,818] INFO [Controller id=1] Resuming preferred replica election for partitions:  (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-19T09:49:09.217347894Z level=info msg="Executing migration" id="add column for to alert_rule_version"
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:10,820] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-19T09:49:09.223640612Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.291908ms
policy-db-migrator | 
policy-db-migrator | 
kafka | [2024-02-19 09:49:10,820] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser)
grafana | logger=migrator t=2024-02-19T09:49:09.228543175Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:10,820] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser)
grafana | logger=migrator t=2024-02-19T09:49:09.235166375Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.61992ms
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:10,821] INFO Kafka startTimeMs: 1708336150811 (org.apache.kafka.common.utils.AppInfoParser)
grafana | logger=migrator t=2024-02-19T09:49:09.241174609Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
policy-db-migrator | 
policy-db-migrator | 
kafka | [2024-02-19 09:49:10,823] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
grafana | logger=migrator t=2024-02-19T09:49:09.24802843Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.853821ms
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:10,834] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-19T09:49:09.252176478Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:10,921] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
grafana | logger=migrator t=2024-02-19T09:49:09.262408849Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=10.228521ms
policy-db-migrator | 
policy-db-migrator | 
kafka | [2024-02-19 09:49:11,005] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.265373077Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:11,052] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
grafana | logger=migrator t=2024-02-19T09:49:09.270192799Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.819782ms
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:11,059] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
grafana | logger=migrator t=2024-02-19T09:49:09.274026354Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
policy-db-migrator | 
policy-db-migrator | 
kafka | [2024-02-19 09:49:15,836] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-19T09:49:09.274132455Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=107.151µs
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:15,837] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-19T09:49:09.277154323Z level=info msg="Executing migration" id=create_alert_configuration_table
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,613] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
grafana | logger=migrator t=2024-02-19T09:49:09.278152161Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=997.938µs
policy-db-migrator | 
policy-db-migrator | 
grafana | logger=migrator t=2024-02-19T09:49:09.280996537Z level=info msg="Executing migration" id="Add column default in alert_configuration"
kafka | [2024-02-19 09:49:37,614] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:09.287762468Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.764751ms
kafka | [2024-02-19 09:49:37,622] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:09.291715343Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
kafka | [2024-02-19 09:49:37,627] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
policy-db-migrator | 
policy-db-migrator | 
grafana | logger=migrator t=2024-02-19T09:49:09.291879344Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=160.161µs
kafka | [2024-02-19 09:49:37,664] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(veTrfW6FRamNLDmnsQwlbQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(sGJ4pqvYRN2F_jW5d81sYw),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:09.296548247Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
kafka | [2024-02-19 09:49:37,665] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-02-19T09:49:09.301732504Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=5.187257ms
kafka | [2024-02-19 09:49:37,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-02-19T09:49:09.304769941Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
kafka | [2024-02-19 09:49:37,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-02-19T09:49:09.305697859Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=927.918µs
kafka | [2024-02-19 09:49:37,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 
policy-pap | 	security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-02-19T09:49:09.311588722Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
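[Editor's note: the controller entries above record the broker creating topic policy-pdp-pap (one partition, single replica) and the internal __consumer_offsets topic (50 partitions, compact cleanup). A minimal sketch of the equivalent explicit topic creation with the Kafka AdminClient; only the topic name, partition count, replication factor and the bootstrap address kafka:9092 come from this log, the rest is illustrative:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class TopicSetup {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // Single-broker CSIT environment, hence replication factor 1.
                // __consumer_offsets is an internal topic the broker manages itself,
                // so only the application topic is created here.
                NewTopic pdpPap = new NewTopic("policy-pdp-pap", 1, (short) 1);
                admin.createTopics(List.of(pdpPap)).all().get();
            }
        }
    }
]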
alert_configuration" kafka | [2024-02-19 09:49:37,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-pap | security.providers = null grafana | logger=migrator t=2024-02-19T09:49:09.316181533Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.591141ms kafka | [2024-02-19 09:49:37,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-02-19T09:49:09.320979206Z level=info msg="Executing migration" id=create_ngalert_configuration_table policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) policy-pap | session.timeout.ms = 45000 kafka | [2024-02-19 09:49:37,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:09.321618382Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=638.336µs policy-db-migrator | -------------- policy-pap | socket.connection.setup.timeout.max.ms = 30000 kafka | [2024-02-19 09:49:37,673] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:09.32592761Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" policy-db-migrator | policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | [2024-02-19 09:49:37,673] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:09.326656816Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=724.566µs policy-db-migrator | policy-pap | ssl.cipher.suites = null kafka | [2024-02-19 09:49:37,673] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:09.330816225Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-02-19 09:49:37,673] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:09.335298765Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.48236ms policy-db-migrator | -------------- policy-pap | ssl.endpoint.identification.algorithm = https kafka | [2024-02-19 09:49:37,673] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:09.338285982Z level=info msg="Executing migration" id="create provenance_type table" policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null kafka | [2024-02-19 09:49:37,674] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:09.338792177Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=510.524µs policy-db-migrator | -------------- policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-02-19 09:49:37,674] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:09.342473269Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" policy-db-migrator | policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-02-19 09:49:37,674] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:09.343625999Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.15706ms policy-db-migrator | policy-pap | ssl.keystore.key = null kafka | [2024-02-19 09:49:37,674] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:09.350150228Z level=info msg="Executing migration" id="create alert_image table" policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-pap | ssl.keystore.location = null kafka | [2024-02-19 09:49:37,674] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.keystore.password = null kafka | [2024-02-19 09:49:37,674] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:09.350919775Z level=info msg="Migration successfully executed" id="create alert_image table" duration=769.767µs policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | ssl.keystore.type = JKS kafka | [2024-02-19 09:49:37,675] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:09.354099373Z level=info msg="Executing migration" id="add unique index on token to alert_image table" policy-db-migrator | -------------- 
policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-02-19 09:49:37,675] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:09.355788649Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.688686ms policy-db-migrator | policy-pap | ssl.provider = null kafka | [2024-02-19 09:49:37,675] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:09.359579973Z level=info msg="Executing migration" id="support longer URLs in alert_image table" policy-db-migrator | policy-pap | ssl.secure.random.implementation = null kafka | [2024-02-19 09:49:37,675] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:09.359684764Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=105.431µs policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-02-19 09:49:37,675] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-19T09:49:09.364503207Z level=info msg="Executing migration" id=create_alert_configuration_history_table policy-pap | ssl.truststore.certificates = null kafka | [2024-02-19 09:49:37,675] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-02-19T09:49:09.365362145Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=858.747µs policy-pap | ssl.truststore.location = null kafka | [2024-02-19 09:49:37,676] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-19T09:49:09.36922218Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" policy-pap | ssl.truststore.password = null kafka | [2024-02-19 09:49:37,676] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-19T09:49:09.370644132Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.420822ms policy-pap | ssl.truststore.type = JKS kafka | [2024-02-19 09:49:37,676] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator 
| grafana | logger=migrator t=2024-02-19T09:49:09.373646479Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-02-19 09:49:37,676] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql grafana | logger=migrator t=2024-02-19T09:49:09.374327045Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" policy-pap | kafka | [2024-02-19 09:49:37,677] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-19T09:49:09.380066996Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" policy-pap | [2024-02-19T09:49:37.066+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | [2024-02-19 09:49:37,677] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-02-19T09:49:09.380497951Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=432.215µs policy-pap | [2024-02-19T09:49:37.066+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | [2024-02-19 09:49:37,678] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-19T09:49:09.385226593Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" policy-pap | [2024-02-19T09:49:37.066+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708336177066 kafka | [2024-02-19 09:49:37,679] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-19T09:49:09.387228981Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.998128ms kafka | [2024-02-19 09:49:37,679] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-02-19T09:49:37.066+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | grafana | logger=migrator t=2024-02-19T09:49:09.392429028Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" kafka | [2024-02-19 09:49:37,679] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition 
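[Editor's note: the ConsumerConfig dump above (client.id=consumer-policy-pap-4, group.id=policy-pap, bootstrap.servers=[kafka:9092], StringDeserializer for key and value, auto.offset.reset=latest) corresponds to a plain Kafka consumer subscribed to policy-pdp-pap. A minimal standalone sketch with those same settings; the poll loop and timeout are illustrative, not PAP's actual code:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapListener {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values taken from the ConsumerConfig dump in this log.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println(record.value());
                    }
                }
            }
        }
    }
]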
with assigned replicas 1 (state.change.logger) policy-pap | [2024-02-19T09:49:37.066+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql grafana | logger=migrator t=2024-02-19T09:49:09.399287269Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.858001ms kafka | [2024-02-19 09:49:37,679] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-02-19T09:49:37.067+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=debee999-74ee-44fc-bcec-6edc4c9db18d, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-19T09:49:09.404293845Z level=info msg="Executing migration" id="create library_element table v1" kafka | [2024-02-19 09:49:37,679] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-02-19T09:49:37.067+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5cceb518-7b72-41da-b42c-3c8775105be3, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-02-19T09:49:09.405013841Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=720.106µs kafka | [2024-02-19 09:49:37,679] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-02-19T09:49:37.067+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=2b1fcce7-d5e6-434b-91b8-c008e22c2873, alive=false, publisher=null]]: starting policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-19T09:49:09.411602211Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" kafka | [2024-02-19 09:49:37,680] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | [2024-02-19T09:49:37.084+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-db-migrator | grafana | logger=migrator t=2024-02-19T09:49:09.413420916Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.811206ms kafka | [2024-02-19 09:49:37,680] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | acks = -1 policy-db-migrator | grafana | logger=migrator t=2024-02-19T09:49:09.416354823Z level=info msg="Executing migration" id="create library_element_connection table v1" kafka | [2024-02-19 09:49:37,680] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | auto.include.jmx.reporter = true policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql grafana | logger=migrator t=2024-02-19T09:49:09.417413653Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.05151ms kafka | [2024-02-19 09:49:37,680] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | batch.size = 16384 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-19T09:49:09.421909503Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" kafka | [2024-02-19 09:49:37,680] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-02-19T09:49:09.422700821Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=791.387µs kafka | [2024-02-19 09:49:37,680] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | buffer.memory = 33554432 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-19T09:49:09.425263784Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" kafka | [2024-02-19 09:49:37,680] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | grafana | logger=migrator t=2024-02-19T09:49:09.42605436Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=790.137µs kafka | [2024-02-19 09:49:37,681] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | client.id = producer-1 policy-db-migrator | 
grafana | logger=migrator t=2024-02-19T09:49:09.428648754Z level=info msg="Executing migration" id="increase max description length to 2048"
kafka | [2024-02-19 09:49:37,681] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | compression.type = none
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
grafana | logger=migrator t=2024-02-19T09:49:09.428677074Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=28.74µs
policy-pap | connections.max.idle.ms = 540000
kafka | [2024-02-19 09:49:37,681] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:09.433385647Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
policy-pap | delivery.timeout.ms = 120000
kafka | [2024-02-19 09:49:37,681] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-02-19T09:49:09.433465427Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=80.31µs
policy-pap | enable.idempotence = true
kafka | [2024-02-19 09:49:37,681] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.436474154Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
policy-pap | interceptor.classes = []
kafka | [2024-02-19 09:49:37,681] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:09.436998679Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=521.565µs
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-02-19 09:49:37,681] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-02-19T09:49:09.440186587Z level=info msg="Executing migration" id="create data_keys table"
policy-pap | linger.ms = 0
kafka | [2024-02-19 09:49:37,682] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-02-19T09:49:09.441905403Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.718366ms
policy-pap | max.block.ms = 60000
kafka | [2024-02-19 09:49:37,682] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
grafana | logger=migrator t=2024-02-19T09:49:09.446366493Z level=info msg="Executing migration" id="create secrets table"
policy-pap | max.in.flight.requests.per.connection = 5
kafka | [2024-02-19 09:49:37,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:09.447143789Z level=info msg="Migration successfully executed" id="create secrets table" duration=776.646µs
policy-pap | max.request.size = 1048576
kafka | [2024-02-19 09:49:37,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-02-19T09:49:09.453137864Z level=info msg="Executing migration" id="rename data_keys name column to id"
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-02-19 09:49:37,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.502507107Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=49.369853ms
policy-pap | metadata.max.idle.ms = 300000
kafka | [2024-02-19 09:49:37,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:09.505322272Z level=info msg="Executing migration" id="add name column into data_keys"
policy-pap | metric.reporters = []
kafka | [2024-02-19 09:49:37,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-02-19T09:49:09.512466747Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.143045ms
policy-pap | metrics.num.samples = 2
kafka | [2024-02-19 09:49:37,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.516429722Z level=info msg="Executing migration" id="copy data_keys id column values into name"
policy-pap | metrics.recording.level = INFO
policy-db-migrator | 
kafka | [2024-02-19 09:49:37,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.516617204Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=187.242µs
policy-pap | metrics.sample.window.ms = 30000
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
kafka | [2024-02-19 09:49:37,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.522310105Z level=info msg="Executing migration" id="rename data_keys name column to label"
policy-db-migrator | --------------
policy-pap | partitioner.adaptive.partitioning.enable = true
kafka | [2024-02-19 09:49:37,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.573673367Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=51.366712ms
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | partitioner.availability.timeout.ms = 0
kafka | [2024-02-19 09:49:37,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.591593228Z level=info msg="Executing migration" id="rename data_keys id column back to name"
policy-db-migrator | --------------
policy-pap | partitioner.class = null
kafka | [2024-02-19 09:49:37,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.641121243Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=49.537755ms
policy-db-migrator | 
policy-pap | partitioner.ignore.keys = false
kafka | [2024-02-19 09:49:37,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.645408191Z level=info msg="Executing migration" id="create kv_store table v1"
policy-db-migrator | 
policy-pap | receive.buffer.bytes = 32768
kafka | [2024-02-19 09:49:37,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.646082527Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=673.946µs
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-pap | reconnect.backoff.max.ms = 1000
kafka | [2024-02-19 09:49:37,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.652312144Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
policy-db-migrator | --------------
policy-pap | reconnect.backoff.ms = 50
kafka | [2024-02-19 09:49:37,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.654321861Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=2.008397ms
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | request.timeout.ms = 30000
kafka | [2024-02-19 09:49:37,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.659570138Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
policy-pap | retries = 2147483647
kafka | [2024-02-19 09:49:37,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.659891091Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=324.053µs
policy-db-migrator | --------------
policy-pap | retry.backoff.ms = 100
kafka | [2024-02-19 09:49:37,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.662950489Z level=info msg="Executing migration" id="create permission table"
policy-db-migrator | 
policy-pap | sasl.client.callback.handler.class = null
kafka | [2024-02-19 09:49:37,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.663873947Z level=info msg="Migration successfully executed" id="create permission table" duration=923.229µs
policy-db-migrator | 
policy-pap | sasl.jaas.config = null
kafka | [2024-02-19 09:49:37,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-02-19T09:49:09.668584219Z level=info msg="Executing migration" id="add unique index permission.role_id"
kafka | [2024-02-19 09:49:37,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-02-19T09:49:09.66966632Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.081951ms
kafka | [2024-02-19 09:49:37,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-02-19T09:49:09.673718605Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
kafka | [2024-02-19 09:49:37,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-02-19T09:49:09.67524403Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.523914ms
kafka | [2024-02-19 09:49:37,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-02-19T09:49:09.679958232Z level=info msg="Executing migration" id="create role table"
kafka | [2024-02-19 09:49:37,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-02-19T09:49:09.681309314Z level=info msg="Migration successfully executed" id="create role table" duration=1.349592ms
kafka | [2024-02-19 09:49:37,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.class = null
grafana | logger=migrator t=2024-02-19T09:49:09.684927936Z level=info msg="Executing migration" id="add column display_name"
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
kafka | [2024-02-19 09:49:37,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-02-19T09:49:09.692916788Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.988452ms
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-02-19T09:49:09.698430968Z level=info msg="Executing migration" id="add column group_name"
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-02-19 09:49:37,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-02-19T09:49:09.711434585Z level=info msg="Migration successfully executed" id="add column group_name" duration=13.010328ms
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-02-19T09:49:09.714493102Z level=info msg="Executing migration" id="add index role.org_id"
policy-db-migrator | 
kafka | [2024-02-19 09:49:37,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-02-19T09:49:09.715284419Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=799.977µs
policy-db-migrator | 
kafka | [2024-02-19 09:49:37,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-02-19T09:49:09.719195025Z level=info msg="Executing migration" id="add unique index role_org_id_name"
policy-db-migrator | > upgrade 0100-pdp.sql
kafka | [2024-02-19 09:49:37,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-02-19T09:49:09.720247934Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.052399ms
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-19T09:49:09.724209859Z level=info msg="Executing migration" id="add index role_org_id_uid"
policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
kafka | [2024-02-19 09:49:37,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-02-19T09:49:09.72538869Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.178421ms
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-02-19 09:49:37,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.728554678Z level=info msg="Executing migration" id="create team role table"
policy-db-migrator | 
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | [2024-02-19 09:49:37,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.729443126Z level=info msg="Migration successfully executed" id="create team role table" duration=887.798µs
policy-db-migrator | 
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | [2024-02-19 09:49:37,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.732611885Z level=info msg="Executing migration" id="add index team_role.org_id"
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | [2024-02-19 09:49:37,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.733923707Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.308722ms
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | [2024-02-19 09:49:37,693] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.737949073Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | [2024-02-19 09:49:37,693] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.739306935Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.357402ms
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
kafka | [2024-02-19 09:49:37,693] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.743930066Z level=info msg="Executing migration" id="add index team_role.team_id"
policy-db-migrator | 
policy-pap | sasl.oauthbearer.scope.claim.name = scope
kafka | [2024-02-19 09:49:37,693] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.745086077Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.156361ms
policy-db-migrator | 
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | [2024-02-19 09:49:37,693] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.74883347Z level=info msg="Executing migration" id="create user role table"
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | [2024-02-19 09:49:37,693] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.749769718Z level=info msg="Migration successfully executed" id="create user role table" duration=935.898µs
policy-db-migrator | --------------
policy-pap | security.protocol = PLAINTEXT
kafka | [2024-02-19 09:49:37,693] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.753870786Z level=info msg="Executing migration" id="add index user_role.org_id"
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
policy-pap | security.providers = null
kafka | [2024-02-19 09:49:37,693] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.755019936Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.14898ms
policy-db-migrator | --------------
policy-pap | send.buffer.bytes = 131072
kafka | [2024-02-19 09:49:37,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.758214034Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
policy-db-migrator | 
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | [2024-02-19 09:49:37,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.759457726Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.243842ms
policy-db-migrator | 
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | [2024-02-19 09:49:37,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.762695655Z level=info msg="Executing migration" id="add index user_role.user_id"
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
policy-pap | ssl.cipher.suites = null
kafka | [2024-02-19 09:49:37,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.763871515Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.17558ms
policy-db-migrator | --------------
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-02-19 09:49:37,694] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.76992838Z level=info msg="Executing migration" id="create builtin role table"
policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-02-19 09:49:37,861] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.770907459Z level=info msg="Migration successfully executed" id="create builtin role table" duration=978.479µs
policy-db-migrator | --------------
policy-pap | ssl.engine.factory.class = null
kafka | [2024-02-19 09:49:37,861] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.773982937Z level=info msg="Executing migration" id="add index builtin_role.role_id"
policy-db-migrator | 
policy-pap | ssl.key.password = null
kafka | [2024-02-19 09:49:37,861] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.775145268Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.153631ms
policy-db-migrator | 
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-02-19 09:49:37,861] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.779250284Z level=info msg="Executing migration" id="add index builtin_role.name"
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-02-19 09:49:37,861] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.780595655Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.344921ms
policy-db-migrator | --------------
policy-pap | ssl.keystore.key = null
kafka | [2024-02-19 09:49:37,861] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.786383969Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
policy-pap | ssl.keystore.location = null
kafka | [2024-02-19 09:49:37,861] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.797367586Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=10.983757ms
policy-db-migrator | --------------
policy-pap | ssl.keystore.password = null
kafka | [2024-02-19 09:49:37,861] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.800739047Z level=info msg="Executing migration" id="add index builtin_role.org_id"
policy-db-migrator | 
policy-pap | ssl.keystore.type = JKS
kafka | [2024-02-19 09:49:37,861] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.802054019Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.314872ms
policy-db-migrator | --------------
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-02-19 09:49:37,861] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.805066786Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
policy-pap | ssl.provider = null
kafka | [2024-02-19 09:49:37,861] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.806371298Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.304262ms
policy-db-migrator | --------------
policy-pap | ssl.secure.random.implementation = null
kafka | [2024-02-19 09:49:37,861] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.810219712Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
policy-db-migrator | 
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-02-19 09:49:37,869] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.811453873Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.227821ms
policy-db-migrator | 
policy-pap | ssl.truststore.certificates = null
kafka | [2024-02-19 09:49:37,870] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.814827414Z level=info msg="Executing migration" id="add unique index role.uid"
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
policy-pap | ssl.truststore.location = null
grafana | logger=migrator t=2024-02-19T09:49:09.816416048Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.593994ms
kafka | [2024-02-19 09:49:37,870] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.truststore.password = null
grafana | logger=migrator t=2024-02-19T09:49:09.819463155Z level=info msg="Executing migration" id="create seed assignment table"
kafka | [2024-02-19 09:49:37,870] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
policy-pap | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-02-19T09:49:09.820258813Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=795.058µs
kafka | [2024-02-19 09:49:37,871] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | transaction.timeout.ms = 60000
grafana | logger=migrator t=2024-02-19T09:49:09.826261987Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
kafka | [2024-02-19 09:49:37,871] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 
policy-pap | transactional.id = null
grafana | logger=migrator t=2024-02-19T09:49:09.828184123Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.921536ms
kafka | [2024-02-19 09:49:37,871] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
grafana | logger=migrator t=2024-02-19T09:49:09.831612105Z level=info msg="Executing migration" id="add column hidden to role table"
kafka | [2024-02-19 09:49:37,872] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
policy-pap | 
grafana | logger=migrator t=2024-02-19T09:49:09.84116289Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=9.551835ms
kafka | [2024-02-19 09:49:37,872] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-02-19T09:49:37.098+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
kafka | [2024-02-19 09:49:37,872] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
grafana | logger=migrator t=2024-02-19T09:49:09.844110787Z level=info msg="Executing migration" id="permission kind migration"
policy-pap | [2024-02-19T09:49:37.114+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | [2024-02-19 09:49:37,872] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:09.850723607Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.60904ms
policy-pap | [2024-02-19T09:49:37.114+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | [2024-02-19 09:49:37,873] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-02-19T09:49:09.855526099Z level=info msg="Executing migration" id="permission attribute migration"
policy-pap | [2024-02-19T09:49:37.114+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708336177113
kafka | [2024-02-19 09:49:37,873] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-02-19T09:49:09.863868434Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.340335ms
policy-pap | [2024-02-19T09:49:37.114+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=2b1fcce7-d5e6-434b-91b8-c008e22c2873, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
kafka | [2024-02-19 09:49:37,873] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-02-19T09:49:09.869157902Z level=info msg="Executing migration" id="permission identifier migration"
policy-pap | [2024-02-19T09:49:37.114+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=cd388c27-810b-4229-a9ff-3b8814edd721, alive=false, publisher=null]]: starting
kafka | [2024-02-19 09:49:37,874] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:09.877656708Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.495726ms
policy-pap | [2024-02-19T09:49:37.114+00:00|INFO|ProducerConfig|main] ProducerConfig values:
kafka | [2024-02-19 09:49:37,874] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
grafana | logger=migrator t=2024-02-19T09:49:09.880834767Z level=info msg="Executing migration" id="add permission identifier index"
policy-pap | acks = -1
kafka | [2024-02-19 09:49:37,874] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | JOIN pdpstatistics b
grafana | logger=migrator t=2024-02-19T09:49:09.882099278Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.263762ms
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-02-19 09:49:37,874] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
grafana | logger=migrator t=2024-02-19T09:49:09.888792838Z level=info msg="Executing migration" id="create query_history table v1"
policy-pap | batch.size = 16384
kafka | [2024-02-19 09:49:37,875] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | SET a.id = b.id
grafana | logger=migrator t=2024-02-19T09:49:09.889918398Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.12554ms
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-02-19 09:49:37,875] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:09.893040377Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
policy-pap | buffer.memory = 33554432
kafka | [2024-02-19 09:49:37,875] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-02-19T09:49:09.894358538Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.310401ms
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-02-19 09:49:37,876] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-02-19T09:49:09.898326943Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
policy-pap | client.id = producer-2
kafka | [2024-02-19 09:49:37,876] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-02-19T09:49:09.898462725Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=135.782µs
policy-pap | compression.type = none
kafka | [2024-02-19 09:49:37,876] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:09.90229626Z level=info msg="Executing migration" id="rbac disabled migrator"
policy-pap | connections.max.idle.ms = 540000
kafka | [2024-02-19 09:49:37,877] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
grafana | logger=migrator t=2024-02-19T09:49:09.9023554Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=59.98µs
policy-pap | delivery.timeout.ms = 120000
kafka | [2024-02-19 09:49:37,877] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:09.906673348Z level=info msg="Executing migration" id="teams permissions migration"
policy-pap | enable.idempotence = true
kafka | [2024-02-19 09:49:37,877] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-02-19T09:49:09.907436696Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=763.268µs
policy-pap | interceptor.classes = []
policy-db-migrator | 
kafka | [2024-02-19 09:49:37,877] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.914147496Z level=info msg="Executing migration" id="dashboard permissions"
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
kafka | [2024-02-19 09:49:37,878] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.914892962Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=746.696µs
policy-pap | linger.ms = 0
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,878] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.918183763Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
policy-pap | max.block.ms = 60000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
kafka | [2024-02-19 09:49:37,878] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.91906614Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=874.107µs
policy-pap | max.in.flight.requests.per.connection = 5
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,878] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.92352401Z level=info msg="Executing migration" id="drop managed folder create actions"
policy-pap | max.request.size = 1048576
policy-db-migrator | 
kafka | [2024-02-19 09:49:37,879] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.923893324Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=369.324µs
policy-pap | metadata.max.age.ms = 300000
policy-db-migrator | 
kafka | [2024-02-19 09:49:37,879] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:09.927206084Z level=info msg="Executing migration" id="alerting notification permissions"
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
kafka | [2024-02-19 09:49:37,879] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | metadata.max.idle.ms = 300000
grafana | logger=migrator t=2024-02-19T09:49:09.927604597Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=399.093µs
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,879] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-02-19T09:49:09.933008986Z level=info msg="Executing migration" id="create query_history_star table v1"
policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
kafka | [2024-02-19 09:49:37,880] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-02-19T09:49:09.933910804Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=901.818µs
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,880] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-02-19T09:49:09.96245662Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
policy-db-migrator | 
kafka | [2024-02-19 09:49:37,880] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-02-19T09:49:09.96461441Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=2.157879ms
policy-db-migrator | 
kafka | [2024-02-19 09:49:37,884] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
policy-pap | partitioner.adaptive.partitioning.enable = true
grafana | logger=migrator t=2024-02-19T09:49:09.968344943Z level=info msg="Executing migration" id="add column org_id in query_history_star"
policy-db-migrator | > upgrade 0210-sequence.sql
kafka | [2024-02-19 09:49:37,884] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
policy-pap | partitioner.availability.timeout.ms = 0
grafana | logger=migrator t=2024-02-19T09:49:09.976243374Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=7.897432ms
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,884] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
policy-pap | partitioner.class = null
grafana | logger=migrator t=2024-02-19T09:49:09.980935006Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
kafka | [2024-02-19 09:49:37,884] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
policy-pap | partitioner.ignore.keys = false
grafana | logger=migrator t=2024-02-19T09:49:09.981026617Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=91.621µs
policy-db-migrator | -------------- kafka | [2024-02-19 09:49:37,884] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) policy-pap | receive.buffer.bytes = 32768 grafana | logger=migrator t=2024-02-19T09:49:09.984215766Z level=info msg="Executing migration" id="create correlation table v1" policy-db-migrator | kafka | [2024-02-19 09:49:37,884] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-02-19T09:49:09.984938062Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=723.096µs policy-db-migrator | kafka | [2024-02-19 09:49:37,884] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-02-19T09:49:09.988719236Z level=info msg="Executing migration" id="add index correlations.uid" policy-db-migrator | > upgrade 0220-sequence.sql kafka | [2024-02-19 09:49:37,885] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) policy-pap | request.timeout.ms = 30000 grafana | logger=migrator t=2024-02-19T09:49:09.989631834Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=912.138µs policy-db-migrator | -------------- policy-pap | retries = 2147483647 grafana | logger=migrator t=2024-02-19T09:49:09.995535417Z level=info msg="Executing migration" id="add index correlations.source_uid" policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) kafka | [2024-02-19 09:49:37,885] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) policy-pap | retry.backoff.ms = 100 grafana | logger=migrator t=2024-02-19T09:49:09.997773068Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=2.28013ms policy-db-migrator | -------------- kafka | [2024-02-19 
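The 0210-sequence.sql and 0220-sequence.sql steps interleaved above create a JPA sequence table and seed its SEQ_GEN row from the highest existing pdpstatistics id, so generated keys continue where the migrated data leaves off. A minimal JDBC sketch of those two statements follows; the connection URL and credentials are placeholders (the real migrator container supplies its own connection settings), and the mariadb-java-client driver is assumed to be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SequenceSeedSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details, not the job's real configuration.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_password");
             Statement st = conn.createStatement()) {
            // 0210-sequence.sql: create the sequence table used for JPA key generation.
            st.execute("CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, "
                    + "SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))");
            // 0220-sequence.sql: seed SEQ_GEN from the current max pdpstatistics id (0 if the table is empty).
            st.execute("INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) "
                    + "VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))");
        }
    }
}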
kafka | [2024-02-19 09:49:37,885] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-02-19T09:49:10.005699499Z level=info msg="Executing migration" id="add correlation config column"
policy-db-migrator |
kafka | [2024-02-19 09:49:37,885] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-02-19T09:49:10.014879341Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.189892ms
policy-db-migrator |
kafka | [2024-02-19 09:49:37,885] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.021811254Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | [2024-02-19 09:49:37,885] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.022946704Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.1347ms
policy-db-migrator | --------------
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
kafka | [2024-02-19 09:49:37,885] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.026737908Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
policy-pap | sasl.kerberos.service.name = null
kafka | [2024-02-19 09:49:37,885] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.027829898Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.09135ms
policy-db-migrator | --------------
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | [2024-02-19 09:49:37,885] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.030916115Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
policy-db-migrator |
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | [2024-02-19 09:49:37,885] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.06484027Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=33.914675ms
policy-db-migrator |
policy-pap | sasl.login.callback.handler.class = null
kafka | [2024-02-19 09:49:37,886] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.072874002Z level=info msg="Executing migration" id="create correlation v2"
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
policy-pap | sasl.login.class = null
kafka | [2024-02-19 09:49:37,886] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.0737535Z level=info msg="Migration successfully executed" id="create correlation v2" duration=920.788µs
policy-db-migrator | --------------
policy-pap | sasl.login.connect.timeout.ms = null
kafka | [2024-02-19 09:49:37,886] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.076920859Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
policy-pap | sasl.login.read.timeout.ms = null
kafka | [2024-02-19 09:49:37,886] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.07815514Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.234031ms
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | [2024-02-19 09:49:37,886] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
policy-db-migrator |
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-02-19T09:49:10.081344659Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
kafka | [2024-02-19 09:49:37,886] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
policy-db-migrator |
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-02-19T09:49:10.08267136Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.322991ms
kafka | [2024-02-19 09:49:37,886] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
policy-db-migrator | > upgrade 0120-toscatrigger.sql
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-02-19T09:49:10.087598795Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
kafka | [2024-02-19 09:49:37,886] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-02-19T09:49:10.088770615Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.17156ms
kafka | [2024-02-19 09:49:37,886] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
policy-pap | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-19T09:49:10.091993365Z level=info msg="Executing migration" id="copy correlation v1 to v2"
kafka | [2024-02-19 09:49:37,887] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-02-19T09:49:10.092404918Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=411.193µs
kafka | [2024-02-19 09:49:37,887] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
policy-db-migrator |
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=migrator t=2024-02-19T09:49:10.095922829Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
kafka | [2024-02-19 09:49:37,887] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
policy-db-migrator |
policy-pap | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-02-19T09:49:10.096893618Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=970.959µs
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
kafka | [2024-02-19 09:49:37,887] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-02-19T09:49:10.101141656Z level=info msg="Executing migration" id="add provisioning column"
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,887] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-02-19T09:49:10.111712132Z level=info msg="Migration successfully executed" id="add provisioning column" duration=10.573156ms
policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
kafka | [2024-02-19 09:49:37,887] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-02-19T09:49:10.115574976Z level=info msg="Executing migration" id="create entity_events table"
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,887] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-19T09:49:10.116465504Z level=info msg="Migration successfully executed" id="create entity_events table" duration=888.977µs
policy-db-migrator |
kafka | [2024-02-19 09:49:37,887] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-02-19T09:49:10.121942013Z level=info msg="Executing migration" id="create dashboard public config v1"
policy-db-migrator |
kafka | [2024-02-19 09:49:37,887] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-02-19T09:49:10.122982553Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.04488ms
policy-db-migrator | > upgrade 0140-toscaparameter.sql
kafka | [2024-02-19 09:49:37,887] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-02-19T09:49:10.126498814Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,887] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-02-19T09:49:10.127116169Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
kafka | [2024-02-19 09:49:37,888] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
policy-pap | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-02-19T09:49:10.129990255Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,888] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
policy-pap | security.providers = null
grafana | logger=migrator t=2024-02-19T09:49:10.1305197Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
policy-db-migrator |
kafka | [2024-02-19 09:49:37,888] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-02-19T09:49:10.133430786Z level=info msg="Executing migration" id="Drop old dashboard public config table"
policy-db-migrator |
kafka | [2024-02-19 09:49:37,888] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-02-19T09:49:10.134372465Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=941.039µs
policy-db-migrator | > upgrade 0150-toscaproperty.sql
kafka | [2024-02-19 09:49:37,888] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
policy-pap | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-02-19T09:49:10.138891585Z level=info msg="Executing migration" id="recreate dashboard public config v1"
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,888] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
policy-pap | ssl.cipher.suites = null
grafana | logger=migrator t=2024-02-19T09:49:10.140227318Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.335113ms
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
kafka | [2024-02-19 09:49:37,888] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-02-19T09:49:10.147666175Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,888] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
policy-pap | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-02-19T09:49:10.149280239Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.614124ms
policy-db-migrator |
kafka | [2024-02-19 09:49:37,888] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
policy-pap | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-02-19T09:49:10.156568955Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,888] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
policy-pap | ssl.key.password = null
grafana | logger=migrator t=2024-02-19T09:49:10.157882586Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.312371ms
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
kafka | [2024-02-19 09:49:37,888] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
policy-pap | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-02-19T09:49:10.171933652Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
policy-db-migrator | --------------
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-02-19 09:49:37,889] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.173499907Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.571485ms
policy-db-migrator |
policy-pap | ssl.keystore.key = null
kafka | [2024-02-19 09:49:37,889] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.176355812Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
policy-db-migrator | --------------
policy-pap | ssl.keystore.location = null
kafka | [2024-02-19 09:49:37,889] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.177315611Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=959.859µs
policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
policy-pap | ssl.keystore.password = null
kafka | [2024-02-19 09:49:37,895] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.18058516Z level=info msg="Executing migration" id="Drop public config table"
policy-db-migrator | --------------
policy-pap | ssl.keystore.type = JKS
kafka | [2024-02-19 09:49:37,904] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.181250165Z level=info msg="Migration successfully executed" id="Drop public config table" duration=664.885µs
policy-db-migrator |
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-02-19 09:49:37,906] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.183882879Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
policy-db-migrator |
policy-pap | ssl.provider = null
kafka | [2024-02-19 09:49:37,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.18501642Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.133371ms
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
policy-pap | ssl.secure.random.implementation = null
kafka | [2024-02-19 09:49:37,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.188805124Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
policy-db-migrator | --------------
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-02-19 09:49:37,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.190016275Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.211021ms
policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
policy-pap | ssl.truststore.certificates = null
kafka | [2024-02-19 09:49:37,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.193748838Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
policy-db-migrator | --------------
policy-pap | ssl.truststore.location = null
kafka | [2024-02-19 09:49:37,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.194991659Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.242431ms
policy-db-migrator |
policy-pap | ssl.truststore.password = null
kafka | [2024-02-19 09:49:37,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.198005076Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
policy-db-migrator | --------------
policy-pap | ssl.truststore.type = JKS
kafka | [2024-02-19 09:49:37,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.199635291Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.629695ms
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
policy-pap | transaction.timeout.ms = 60000
kafka | [2024-02-19 09:49:37,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.203572716Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
policy-db-migrator | --------------
policy-pap | transactional.id = null
kafka | [2024-02-19 09:49:37,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.235673875Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=32.101119ms
policy-db-migrator |
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-02-19 09:49:37,908] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.239895453Z level=info msg="Executing migration" id="add annotations_enabled column"
policy-db-migrator |
policy-pap |
kafka | [2024-02-19 09:49:37,908] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.247289679Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.394716ms
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
policy-pap | [2024-02-19T09:49:37.115+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
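The "Instantiated an idempotent producer" line above, together with the producer settings echoed earlier in the PAP log (retries = 2147483647, request.timeout.ms = 30000, value.serializer = StringSerializer), corresponds to a plain Kafka producer configured roughly as in the following sketch. This is illustrative only: the bootstrap address and the sample record are assumptions, and PAP actually builds its producer through its own topic-sink wrapper (the InlineKafkaTopicSink logged below) rather than directly like this.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PapLikeProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");  // assumed broker address
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);         // "Instantiated an idempotent producer"
        props.put(ProducerConfig.RETRIES_CONFIG, 2147483647);              // retries = 2147483647 (Integer.MAX_VALUE)
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);        // request.timeout.ms = 30000
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical payload; PAP publishes PDP messages on the policy-pdp-pap topic.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
            producer.flush();
        }
    }
}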
kafka | [2024-02-19 09:49:37,908] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.253776717Z level=info msg="Executing migration" id="add time_selection_enabled column"
policy-db-migrator | --------------
policy-pap | [2024-02-19T09:49:37.118+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | [2024-02-19 09:49:37,908] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.265360712Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=11.578155ms
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
policy-pap | [2024-02-19T09:49:37.118+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | [2024-02-19 09:49:37,908] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.269493649Z level=info msg="Executing migration" id="delete orphaned public dashboards"
policy-db-migrator | --------------
policy-pap | [2024-02-19T09:49:37.118+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708336177118
kafka | [2024-02-19 09:49:37,908] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.269735331Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=241.712µs
policy-db-migrator |
policy-pap | [2024-02-19T09:49:37.118+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=cd388c27-810b-4229-a9ff-3b8814edd721, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
kafka | [2024-02-19 09:49:37,908] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.272741358Z level=info msg="Executing migration" id="add share column"
policy-db-migrator | --------------
policy-pap | [2024-02-19T09:49:37.118+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
kafka | [2024-02-19 09:49:37,908] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.280919782Z level=info msg="Migration successfully executed" id="add share column" duration=8.178424ms
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
policy-pap | [2024-02-19T09:49:37.118+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
kafka | [2024-02-19 09:49:37,909] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-02-19T09:49:37.119+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
grafana | logger=migrator t=2024-02-19T09:49:10.284598284Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
kafka | [2024-02-19 09:49:37,909] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-02-19T09:49:37.120+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
grafana | logger=migrator t=2024-02-19T09:49:10.284796426Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=197.842µs
kafka | [2024-02-19 09:49:37,909] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-02-19T09:49:37.121+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
grafana | logger=migrator t=2024-02-19T09:49:10.287667202Z level=info msg="Executing migration" id="create file table"
kafka | [2024-02-19 09:49:37,909] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
policy-pap | [2024-02-19T09:49:37.122+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
grafana | logger=migrator t=2024-02-19T09:49:10.288315178Z level=info msg="Migration successfully executed" id="create file table" duration=646.916µs
kafka | [2024-02-19 09:49:37,909] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-02-19T09:49:37.122+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
grafana | logger=migrator t=2024-02-19T09:49:10.292614066Z level=info msg="Executing migration" id="file table idx: path natural pk"
kafka | [2024-02-19 09:49:37,909] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
policy-pap | [2024-02-19T09:49:37.140+00:00|INFO|TimerManager|Thread-9] timer manager update started
grafana | logger=migrator t=2024-02-19T09:49:10.295142259Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=2.527343ms
kafka | [2024-02-19 09:49:37,909] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-02-19T09:49:37.141+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
grafana | logger=migrator t=2024-02-19T09:49:10.299708081Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
kafka | [2024-02-19 09:49:37,910] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-02-19T09:49:37.146+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
grafana | logger=migrator t=2024-02-19T09:49:10.301419295Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.712274ms
kafka | [2024-02-19 09:49:37,910] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-02-19T09:49:37.150+00:00|INFO|ServiceManager|main] Policy PAP started
grafana | logger=migrator t=2024-02-19T09:49:10.311160333Z level=info msg="Executing migration" id="create file_meta table"
kafka | [2024-02-19 09:49:37,910] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | > upgrade 0100-upgrade.sql
policy-pap | [2024-02-19T09:49:37.151+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.775 seconds (process running for 12.481)
grafana | logger=migrator t=2024-02-19T09:49:10.312854708Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.699985ms
kafka | [2024-02-19 09:49:37,910] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-02-19T09:49:37.585+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-02-19T09:49:10.318236866Z level=info msg="Executing migration" id="file table idx: path key"
kafka | [2024-02-19 09:49:37,910] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | select 'upgrade to 1100 completed' as msg
policy-pap | [2024-02-19T09:49:37.585+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Cluster ID: -JbPgT5mQDm7ZrQFfE5t_Q
grafana | logger=migrator t=2024-02-19T09:49:10.319517049Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.280143ms
kafka | [2024-02-19 09:49:37,910] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-02-19T09:49:37.585+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: -JbPgT5mQDm7ZrQFfE5t_Q
grafana | logger=migrator t=2024-02-19T09:49:10.332319903Z level=info msg="Executing migration" id="set path collation in file table"
kafka | [2024-02-19 09:49:37,911] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-02-19T09:49:37.585+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: -JbPgT5mQDm7ZrQFfE5t_Q
grafana | logger=migrator t=2024-02-19T09:49:10.332421414Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=102.861µs
kafka | [2024-02-19 09:49:37,911] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | msg
policy-pap | [2024-02-19T09:49:37.652+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-02-19T09:49:10.337020876Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
kafka | [2024-02-19 09:49:37,911] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | upgrade to 1100 completed
policy-pap | [2024-02-19T09:49:37.653+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: -JbPgT5mQDm7ZrQFfE5t_Q
grafana | logger=migrator t=2024-02-19T09:49:10.337163577Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=144.022µs
kafka | [2024-02-19 09:49:37,911] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-02-19T09:49:37.697+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-02-19T09:49:10.340393275Z level=info msg="Executing migration" id="managed permissions migration"
kafka | [2024-02-19 09:49:37,911] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
policy-pap | [2024-02-19T09:49:37.709+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
grafana | logger=migrator t=2024-02-19T09:49:10.341354275Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=960.62µs
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,911] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-19T09:49:37.713+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
grafana | logger=migrator t=2024-02-19T09:49:10.349455838Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
kafka | [2024-02-19 09:49:37,912] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-19T09:49:37.778+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-02-19T09:49:10.349820931Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=365.253µs
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,912] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-19T09:49:37.826+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-02-19T09:49:10.354403212Z level=info msg="Executing migration" id="RBAC action name migrator"
policy-db-migrator |
kafka | [2024-02-19 09:49:37,912] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.355590673Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.188211ms
policy-pap | [2024-02-19T09:49:37.894+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-02-19 09:49:37,912] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.35862939Z level=info msg="Executing migration" id="Add UID column to playlist"
policy-pap | [2024-02-19T09:49:37.942+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
kafka | [2024-02-19 09:49:37,912] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.36868711Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=10.05812ms
policy-pap | [2024-02-19T09:49:38.009+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,912] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.371226573Z level=info msg="Executing migration" id="Update uid column values in playlist"
policy-pap | [2024-02-19T09:49:38.050+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
kafka | [2024-02-19 09:49:37,913] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.371341524Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=114.381µs
policy-pap | [2024-02-19T09:49:38.115+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,913] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.374018298Z level=info msg="Executing migration" id="Add index for uid in playlist"
policy-pap | [2024-02-19T09:49:38.156+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-02-19 09:49:37,913] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.374801275Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=782.677µs
policy-pap | [2024-02-19T09:49:38.224+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,913] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.377228876Z level=info msg="Executing migration" id="update group index for alert rules"
policy-pap | [2024-02-19T09:49:38.265+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
kafka | [2024-02-19 09:49:37,913] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.377559Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=330.424µs
policy-pap | [2024-02-19T09:49:38.329+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,915] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.380297785Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
policy-pap | [2024-02-19T09:49:38.371+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:10.380594427Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=298.602µs policy-pap | [2024-02-19T09:49:38.436+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-02-19 09:49:37,915] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:10.383487223Z level=info msg="Executing migration" id="admin only folder/dashboard permission" policy-pap | [2024-02-19T09:49:38.477+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | > upgrade 0120-audit_sequence.sql kafka | [2024-02-19 09:49:37,915] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:10.384060138Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=572.455µs policy-pap | [2024-02-19T09:49:38.543+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-02-19 09:49:37,919] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:10.387973783Z level=info msg="Executing migration" id="add action column to seed_assignment" policy-pap | [2024-02-19T09:49:38.585+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-02-19 09:49:37,921] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-19T09:49:10.39653752Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=8.562597ms policy-pap | [2024-02-19T09:49:38.658+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-db-migrator | -------------- kafka | [2024-02-19 09:49:37,921] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, 
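The repeated LEADER_NOT_AVAILABLE warnings above are transient: the policy-pdp-pap topic has just been auto-created, and every metadata fetch (correlation ids 6, 8, 10, ...) fails until the controller finishes the NewReplica -> OnlineReplica transitions interleaved from the kafka container and elects a leader for the new partition. The Kafka client retries this internally, which is why the warnings simply stop once "Discovered group coordinator" appears. A minimal Java sketch of a consumer equivalent to the ones logging here (broker address, topic, and group id are taken from this log; everything else is illustrative, not the actual policy-pap wiring):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // While leader election for the new topic is in flight, metadata
                // fetches log WARN lines like the ones above and poll() returns
                // empty batches; once a leader exists, records start arriving.
                while (true) {
                    consumer.poll(Duration.ofSeconds(1))
                            .forEach(r -> System.out.println(r.value()));
                }
            }
        }
    }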
kafka | [2024-02-19 09:49:37,921] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.402233901Z level=info msg="Executing migration" id="add scope column to seed_assignment"
policy-pap | [2024-02-19T09:49:38.665+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-db-migrator |
kafka | [2024-02-19 09:49:37,923] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.410813188Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.579287ms
policy-pap | [2024-02-19T09:49:38.695+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,923] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.41982067Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
policy-pap | [2024-02-19T09:49:38.697+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] (Re-)joining group
policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
kafka | [2024-02-19 09:49:37,923] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.420902629Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.081959ms
policy-pap | [2024-02-19T09:49:38.705+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-a1f25c60-2000-42dd-8d44-739b6374e145
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,923] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.431890278Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
policy-pap | [2024-02-19T09:49:38.706+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-db-migrator |
kafka | [2024-02-19 09:49:37,923] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.540642725Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=108.753647ms
policy-pap | [2024-02-19T09:49:38.706+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-db-migrator |
kafka | [2024-02-19 09:49:37,923] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.547029173Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
policy-pap | [2024-02-19T09:49:38.709+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Request joining group due to: need to re-join with the given member-id: consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3-f725ff30-8c0a-45d9-be77-4e795839c775
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
kafka | [2024-02-19 09:49:37,923] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.548121162Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.089649ms
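The "(Re-)joining group" / MemberIdRequiredException pair above is not an error. Since KIP-394, the broker rejects the first JoinGroup request from a consumer that has no member id, hands back a generated one (here consumer-policy-pap-4-a1f25c60-2000-42dd-8d44-739b6374e145), and the client silently rejoins with it; the "rebalance failed" wording is just the client logging its internal retry. From application code the whole dance is invisible; all that can be observed is the eventual assignment callback. A hedged sketch, reusing the consumer from the previous snippet (the listener bodies are illustrative):

    import java.util.Collection;
    import java.util.List;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.common.TopicPartition;

    consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            // Nothing is held yet on the first join, so nothing to release here.
        }
        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            // Fires once the two-step join completes at generation 1, matching
            // the "Adding newly assigned partitions: policy-pdp-pap-0" line below.
            System.out.println("assigned: " + partitions);
        }
    });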
policy-pap | [2024-02-19T09:49:38.709+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,924] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.552094458Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
policy-pap | [2024-02-19T09:49:38.710+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] (Re-)joining group
policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
kafka | [2024-02-19 09:49:37,924] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.552967426Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=872.968µs
policy-pap | [2024-02-19T09:49:41.624+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-db-migrator | --------------
kafka | [2024-02-19 09:49:37,924] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.556832771Z level=info msg="Executing migration" id="add primary key to seed_assigment"
policy-pap | [2024-02-19T09:49:41.625+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:10.590393213Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=33.559252ms
policy-pap | [2024-02-19T09:49:41.627+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
kafka | [2024-02-19 09:49:37,924] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:10.593170918Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
policy-pap | [2024-02-19T09:49:41.736+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-a1f25c60-2000-42dd-8d44-739b6374e145', protocol='range'}
kafka | [2024-02-19 09:49:37,924] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
grafana | logger=migrator t=2024-02-19T09:49:10.59337312Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=202.202µs
policy-pap | [2024-02-19T09:49:41.738+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Successfully joined group with generation Generation{generationId=1, memberId='consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3-f725ff30-8c0a-45d9-be77-4e795839c775', protocol='range'}
kafka | [2024-02-19 09:49:37,924] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:10.596174974Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
policy-pap | [2024-02-19T09:49:41.748+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-a1f25c60-2000-42dd-8d44-739b6374e145=Assignment(partitions=[policy-pdp-pap-0])}
kafka | [2024-02-19 09:49:37,924] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:10.596380276Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=205.302µs
kafka | [2024-02-19 09:49:37,924] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-19T09:49:41.748+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Finished assignment for group at generation 1: {consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3-f725ff30-8c0a-45d9-be77-4e795839c775=Assignment(partitions=[policy-pdp-pap-0])}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:10.599423323Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
kafka | [2024-02-19 09:49:37,924] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-19T09:49:41.784+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-a1f25c60-2000-42dd-8d44-739b6374e145', protocol='range'}
policy-db-migrator | TRUNCATE TABLE sequence
grafana | logger=migrator t=2024-02-19T09:49:10.599645635Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=222.312µs
kafka | [2024-02-19 09:49:37,924] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-19T09:49:41.785+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:10.603422749Z level=info msg="Executing migration" id="create folder table"
kafka | [2024-02-19 09:49:37,925] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-19T09:49:41.785+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Successfully synced group in generation Generation{generationId=1, memberId='consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3-f725ff30-8c0a-45d9-be77-4e795839c775', protocol='range'}
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:10.604305068Z level=info msg="Migration successfully executed" id="create folder table" duration=882.319µs
kafka | [2024-02-19 09:49:37,925] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-19T09:49:41.786+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:10.608439714Z level=info msg="Executing migration" id="Add index for parent_uid"
kafka | [2024-02-19 09:49:37,925] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-19T09:49:41.791+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
grafana | logger=migrator t=2024-02-19T09:49:10.610325502Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.885788ms
kafka | [2024-02-19 09:49:37,925] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-19T09:49:41.791+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:10.616051283Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
kafka | [2024-02-19 09:49:37,925] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-19T09:49:41.815+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
grafana | logger=migrator t=2024-02-19T09:49:10.617211343Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.16006ms
kafka | [2024-02-19 09:49:37,925] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-19T09:49:41.815+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Found no committed offset for partition policy-pdp-pap-0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:10.623947814Z level=info msg="Executing migration" id="Update folder title length"
kafka | [2024-02-19 09:49:37,925] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-19T09:49:41.836+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:10.624009995Z level=info msg="Migration successfully executed" id="Update folder title length" duration=62.181µs
kafka | [2024-02-19 09:49:37,925] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-19T09:49:41.837+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3, groupId=5cceb518-7b72-41da-b42c-3c8775105be3] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:10.627482175Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
kafka | [2024-02-19 09:49:37,925] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-19T09:49:58.813+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers:
policy-db-migrator | DROP TABLE pdpstatistics
grafana | logger=migrator t=2024-02-19T09:49:10.629897897Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.415722ms
kafka | [2024-02-19 09:49:37,926] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | []
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:10.633322668Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
kafka | [2024-02-19 09:49:37,926] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
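"Found no committed offset for partition policy-pdp-pap-0" followed by "Resetting offset ... to position FetchPosition{offset=1, ...}" is the auto.offset.reset path: the group is brand new, so no committed offset exists, and the consumer falls back to its configured reset policy, landing on offset 1, the current end of the partition (one PDP_STATUS heartbeat was produced before the group finished joining). A sketch of the two ways that position can come about (the configuration value is an assumption, not read from policy-pap):

    // Declarative form: with no committed offset, "latest" seeks to the log end.
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

    // Manual equivalent once the partition has been assigned:
    TopicPartition tp = new TopicPartition("policy-pdp-pap", 0);
    consumer.seekToEnd(List.of(tp));
    long position = consumer.position(tp); // 1 in this run, matching FetchPosition{offset=1, ...}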
policy-pap | [2024-02-19T09:49:58.815+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:10.634387757Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.065089ms
kafka | [2024-02-19 09:49:37,926] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2632223f-979c-46ca-b8b2-6772ebd25f34","timestampMs":1708336198772,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup"}
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:10.638513475Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
kafka | [2024-02-19 09:49:37,926] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-19T09:49:58.814+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-02-19T09:49:10.639635604Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.122129ms
kafka | [2024-02-19 09:49:37,927] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2632223f-979c-46ca-b8b2-6772ebd25f34","timestampMs":1708336198772,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup"}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:10.644159786Z level=info msg="Executing migration" id="Sync dashboard and folder table"
policy-pap | [2024-02-19T09:49:58.826+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
kafka | [2024-02-19 09:49:37,927] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
grafana | logger=migrator t=2024-02-19T09:49:10.644587619Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=427.833µs
policy-pap | [2024-02-19T09:49:58.916+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate starting
kafka | [2024-02-19 09:49:37,927] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:10.649364092Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
policy-pap | [2024-02-19T09:49:58.917+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate starting listener
kafka | [2024-02-19 09:49:37,927] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:10.649823096Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=459.004µs
kafka | [2024-02-19 09:49:37,927] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:10.653806182Z level=info msg="Executing migration" id="create anon_device table"
policy-pap | [2024-02-19T09:49:58.917+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate starting timer
kafka | [2024-02-19 09:49:37,927] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
grafana | logger=migrator t=2024-02-19T09:49:10.655186775Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.380593ms
policy-pap | [2024-02-19T09:49:58.918+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=102dc6a6-6858-4db0-95fe-a28908d45b01, expireMs=1708336228918]
kafka | [2024-02-19 09:49:37,927] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:10.659493393Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
policy-pap | [2024-02-19T09:49:58.920+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate starting enqueue
kafka | [2024-02-19 09:49:37,927] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | DROP TABLE statistics_sequence
grafana | logger=migrator t=2024-02-19T09:49:10.661847484Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.354091ms
policy-pap | [2024-02-19T09:49:58.920+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=102dc6a6-6858-4db0-95fe-a28908d45b01, expireMs=1708336228918]
kafka | [2024-02-19 09:49:37,927] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-19T09:49:10.671890695Z level=info msg="Executing migration" id="add index anon_device.updated_at"
policy-pap | [2024-02-19T09:49:58.920+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate started
kafka | [2024-02-19 09:49:37,927] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-19T09:49:10.673000254Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.109559ms
policy-pap | [2024-02-19T09:49:58.922+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
kafka | [2024-02-19 09:49:37,928] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | policyadmin: OK: upgrade (1300)
grafana | logger=migrator t=2024-02-19T09:49:10.677042664Z level=info msg="Executing migration" id="create signing_key table"
policy-pap | {"source":"pap-89db414e-61b1-454e-b88b-59220abcdad7","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"102dc6a6-6858-4db0-95fe-a28908d45b01","timestampMs":1708336198898,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
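The [OUT|KAFKA|policy-pdp-pap] payload above is the PDP_UPDATE itself. The field names below are taken verbatim from that JSON; the parsing code is an illustrative Jackson fragment (pdpUpdateJson is an assumed variable holding the payload), not the actual policy-models DTO handling:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    ObjectMapper mapper = new ObjectMapper();
    JsonNode msg = mapper.readTree(pdpUpdateJson);            // the payload logged above
    String messageName = msg.path("messageName").asText();    // "PDP_UPDATE"
    String requestId   = msg.path("requestId").asText();      // "102dc6a6-6858-4db0-95fe-a28908d45b01"
    long   timestampMs = msg.path("timestampMs").asLong();    // 1708336198898
    boolean noop = msg.path("policiesToBeDeployed").isEmpty(); // true here: nothing to deploy yet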
kafka | [2024-02-19 09:49:37,928] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | name version
grafana | logger=migrator t=2024-02-19T09:49:10.677871038Z level=info msg="Migration successfully executed" id="create signing_key table" duration=828.374µs
policy-pap | [2024-02-19T09:49:58.959+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-02-19 09:49:37,928] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | policyadmin 1300
grafana | logger=migrator t=2024-02-19T09:49:10.680910505Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
policy-pap | {"source":"pap-89db414e-61b1-454e-b88b-59220abcdad7","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"102dc6a6-6858-4db0-95fe-a28908d45b01","timestampMs":1708336198898,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-19 09:49:37,928] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | ID script operation from_version to_version tag success atTime
grafana | logger=migrator t=2024-02-19T09:49:10.682697891Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.787386ms
policy-pap | [2024-02-19T09:49:58.960+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
kafka | [2024-02-19 09:49:37,929] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:04
grafana | logger=migrator t=2024-02-19T09:49:10.687740817Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
policy-pap | [2024-02-19T09:49:58.962+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-02-19 09:49:37,929] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:04
grafana | logger=migrator t=2024-02-19T09:49:10.688946888Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.206071ms
policy-pap | {"source":"pap-89db414e-61b1-454e-b88b-59220abcdad7","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"102dc6a6-6858-4db0-95fe-a28908d45b01","timestampMs":1708336198898,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-19 09:49:37,929] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:04
grafana | logger=migrator t=2024-02-19T09:49:10.692832242Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
policy-pap | [2024-02-19T09:49:58.962+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
kafka | [2024-02-19 09:49:37,929] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:04
grafana | logger=migrator t=2024-02-19T09:49:10.693143865Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=311.623µs
policy-pap | [2024-02-19T09:49:58.981+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-02-19 09:49:37,929] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:04
grafana | logger=migrator t=2024-02-19T09:49:10.69701202Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4b869b48-595d-46b2-866d-f92ac2000e9f","timestampMs":1708336198966,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup"}
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:04
grafana | logger=migrator t=2024-02-19T09:49:10.710286789Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=13.274769ms
policy-pap | [2024-02-19T09:49:58.983+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:04
grafana | logger=migrator t=2024-02-19T09:49:10.748589023Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4b869b48-595d-46b2-866d-f92ac2000e9f","timestampMs":1708336198966,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup"}
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:04
grafana | logger=migrator t=2024-02-19T09:49:10.749912075Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.323052ms
policy-pap | [2024-02-19T09:49:58.984+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
grafana | logger=migrator t=2024-02-19T09:49:10.758884976Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
policy-pap | [2024-02-19T09:49:58.990+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
grafana | logger=migrator t=2024-02-19T09:49:10.76043914Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.554164ms
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"102dc6a6-6858-4db0-95fe-a28908d45b01","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"2a703d74-e531-4901-bb06-fdd53ef492c4","timestampMs":1708336198968,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
grafana | logger=migrator t=2024-02-19T09:49:10.771888673Z level=info msg="Executing migration" id="create sso_setting table"
policy-pap | [2024-02-19T09:49:59.007+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate stopping
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.774126983Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=2.23831ms
policy-pap | [2024-02-19T09:49:59.008+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate stopping enqueue
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.781278218Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
policy-pap | [2024-02-19T09:49:59.008+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate stopping timer
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.782053454Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=775.236µs
policy-pap | [2024-02-19T09:49:59.008+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=102dc6a6-6858-4db0-95fe-a28908d45b01, expireMs=1708336228918]
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.786005299Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
policy-pap | [2024-02-19T09:49:59.009+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate stopping listener
policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.786371542Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=366.243µs
policy-pap | [2024-02-19T09:49:59.009+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate stopped
policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
grafana | logger=migrator t=2024-02-19T09:49:10.790236048Z level=info msg="migrations completed" performed=526 skipped=0 duration=3.973576247s
policy-pap | [2024-02-19T09:49:59.013+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
grafana | logger=sqlstore t=2024-02-19T09:49:10.800836563Z level=info msg="Created default admin" user=admin
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"102dc6a6-6858-4db0-95fe-a28908d45b01","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"2a703d74-e531-4901-bb06-fdd53ef492c4","timestampMs":1708336198968,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
grafana | logger=sqlstore t=2024-02-19T09:49:10.801206177Z level=info msg="Created default organization"
policy-pap | [2024-02-19T09:49:59.014+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 102dc6a6-6858-4db0-95fe-a28908d45b01
policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
grafana | logger=secrets t=2024-02-19T09:49:10.808981986Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
policy-pap | [2024-02-19T09:49:59.019+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate successful
policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
grafana | logger=plugin.store t=2024-02-19T09:49:10.825315602Z level=info msg="Loading plugins..."
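"PdpUpdate successful" closes the loop: the PDP_STATUS response carries response.responseTo equal to the update's requestId (102dc6a6-6858-4db0-95fe-a28908d45b01), which is how the dispatcher pairs it with the pending request and cancels the 30-second timer; the copy of the same message arriving on the heartbeat topic is dropped with "no listener for request id" because only the pdp-pap source registered a listener for that id. The correlation step, as an illustrative sketch (statusPayloadFromLog is an assumed variable; mapper is the ObjectMapper from the earlier fragment):

    JsonNode status = mapper.readTree(statusPayloadFromLog); // the response payload above
    String responseTo = status.path("response").path("responseTo").asText();
    String responseStatus = status.path("response").path("responseStatus").asText();
    if ("102dc6a6-6858-4db0-95fe-a28908d45b01".equals(responseTo)
            && "SUCCESS".equals(responseStatus)) {
        // Cancel Timer [name=102dc6a6-..., expireMs=1708336228918] and mark the
        // PdpUpdate successful, as the TimerManager/RequestImpl lines above record.
    }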
policy-pap | [2024-02-19T09:49:59.019+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c start publishing next request
policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.019+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpStateChange starting
grafana | logger=local.finder t=2024-02-19T09:49:10.864464134Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.020+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpStateChange starting listener
grafana | logger=plugin.store t=2024-02-19T09:49:10.864525725Z level=info msg="Plugins loaded" count=55 duration=39.211003ms
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.020+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpStateChange starting timer
grafana | logger=query_data t=2024-02-19T09:49:10.866896216Z level=info msg="Query Service initialization"
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.020+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=79d938ec-9d1c-4da5-9307-1df889903ca3, expireMs=1708336229020]
grafana | logger=live.push_http t=2024-02-19T09:49:10.870218646Z level=info msg="Live Push Gateway initialization"
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.020+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=79d938ec-9d1c-4da5-9307-1df889903ca3, expireMs=1708336229020]
grafana | logger=ngalert.migration t=2024-02-19T09:49:10.875376512Z level=info msg=Starting
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.020+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpStateChange starting enqueue
grafana | logger=ngalert.migration orgID=1 t=2024-02-19T09:49:10.876080349Z level=info msg="Migrating alerts for organisation"
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.021+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpStateChange started
grafana | logger=ngalert.migration orgID=1 t=2024-02-19T09:49:10.876691655Z level=info msg="Alerts found to migrate" alerts=0
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.022+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-02-19T09:49:10.878159377Z level=info msg="Completed legacy migration"
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
policy-pap | {"source":"pap-89db414e-61b1-454e-b88b-59220abcdad7","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"79d938ec-9d1c-4da5-9307-1df889903ca3","timestampMs":1708336198899,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=infra.usagestats.collector t=2024-02-19T09:49:10.905738615Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:05
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.032+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=provisioning.datasources t=2024-02-19T09:49:10.908346579Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
policy-pap | {"source":"pap-89db414e-61b1-454e-b88b-59220abcdad7","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"79d938ec-9d1c-4da5-9307-1df889903ca3","timestampMs":1708336198899,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=provisioning.alerting t=2024-02-19T09:49:10.926489622Z level=info msg="starting to provision alerting"
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
grafana | logger=provisioning.alerting t=2024-02-19T09:49:10.926528303Z level=info msg="finished to provision alerting"
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
policy-pap | [2024-02-19T09:49:59.032+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
grafana | logger=grafanaStorageLogger t=2024-02-19T09:49:10.928375479Z level=info msg="Storage starting"
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
policy-pap | [2024-02-19T09:49:59.048+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
grafana | logger=ngalert.multiorg.alertmanager t=2024-02-19T09:49:10.928423299Z level=info msg="Starting MultiOrg Alertmanager"
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"79d938ec-9d1c-4da5-9307-1df889903ca3","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"cae7ba32-f300-4525-a0b4-4f77bf4189d3","timestampMs":1708336199035,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
grafana | logger=http.server t=2024-02-19T09:49:10.932301455Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
policy-pap | [2024-02-19T09:49:59.049+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 79d938ec-9d1c-4da5-9307-1df889903ca3
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
grafana | logger=grafana-apiserver t=2024-02-19T09:49:10.937873454Z level=info msg="Authentication is disabled"
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
policy-pap | [2024-02-19T09:49:59.061+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
grafana | logger=ngalert.state.manager t=2024-02-19T09:49:10.928585201Z level=info msg="Warming state cache for startup"
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
policy-pap | {"source":"pap-89db414e-61b1-454e-b88b-59220abcdad7","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"79d938ec-9d1c-4da5-9307-1df889903ca3","timestampMs":1708336198899,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
grafana | logger=grafana-apiserver t=2024-02-19T09:49:10.94525202Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
policy-pap | [2024-02-19T09:49:59.061+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
grafana | logger=ngalert.state.manager t=2024-02-19T09:49:11.016486361Z level=info msg="State cache has been initialized" states=0 duration=89.509205ms
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
policy-pap | [2024-02-19T09:49:59.065+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
grafana | logger=ngalert.scheduler t=2024-02-19T09:49:11.016566032Z level=info msg="Starting scheduler" tickInterval=10s
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"79d938ec-9d1c-4da5-9307-1df889903ca3","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"cae7ba32-f300-4525-a0b4-4f77bf4189d3","timestampMs":1708336199035,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
grafana | logger=ticker t=2024-02-19T09:49:11.016683913Z level=info msg=starting first_tick=2024-02-19T09:49:20Z
policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
policy-pap | [2024-02-19T09:49:59.065+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpStateChange stopping
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
grafana | logger=plugins.update.checker t=2024-02-19T09:49:11.0298816Z level=info msg="Update check succeeded" duration=103.079095ms
policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
policy-pap | [2024-02-19T09:49:59.065+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpStateChange stopping enqueue
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
grafana | logger=grafana.update.checker t=2024-02-19T09:49:11.109448586Z level=info msg="Update check succeeded" duration=181.637092ms
policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
policy-pap | [2024-02-19T09:49:59.065+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpStateChange stopping timer
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
grafana | logger=infra.usagestats t=2024-02-19T09:50:05.940733666Z level=info msg="Usage stats are ready to report"
policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
policy-pap | [2024-02-19T09:49:59.065+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=79d938ec-9d1c-4da5-9307-1df889903ca3, expireMs=1708336229020]
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
policy-pap | [2024-02-19T09:49:59.065+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpStateChange stopping listener
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
policy-pap | [2024-02-19T09:49:59.065+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpStateChange stopped
policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.065+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpStateChange successful
policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.065+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c start publishing next request
policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.065+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate starting
policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate starting listener
policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate starting timer
policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
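Each policy-db-migrator line above is one row of its schema-history bookkeeping. Reading the columns as row number, script, operation, from-version, to-version, migration tag, success flag, and completion timestamp is consistent with every row in this log, but these column names are an assumption inferred from the layout, not taken from the migrator's schema. A small parser under that assumption:

```python
from typing import NamedTuple

class MigrationRow(NamedTuple):
    # Assumed column meanings, inferred from the log layout only.
    row: int
    script: str
    op: str
    from_ver: str
    to_ver: str
    tag: str
    success: int
    completed: str  # "YYYY-MM-DD HH:MM:SS"

def parse_row(line: str) -> MigrationRow:
    parts = line.split()
    return MigrationRow(int(parts[0]), parts[1], parts[2], parts[3],
                        parts[4], parts[5], int(parts[6]),
                        " ".join(parts[7:9]))

print(parse_row("47 0560-toscacapabilitytypes_toscacapabilitytype.sql "
                "upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:06"))
```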
policy-pap | [2024-02-19T09:49:59.066+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=27c77e08-86d8-4d6c-a377-69c796e75a58, expireMs=1708336229066]
policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.066+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate starting enqueue
policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
policy-pap | [2024-02-19T09:49:59.066+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:37,970] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
policy-pap | {"source":"pap-89db414e-61b1-454e-b88b-59220abcdad7","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"27c77e08-86d8-4d6c-a377-69c796e75a58","timestampMs":1708336199051,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:37,972] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
policy-pap | [2024-02-19T09:49:59.067+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate started
policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:37,972] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
policy-pap | [2024-02-19T09:49:59.075+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:38,030] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | {"source":"pap-89db414e-61b1-454e-b88b-59220abcdad7","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"27c77e08-86d8-4d6c-a377-69c796e75a58","timestampMs":1708336199051,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:38,042] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-02-19T09:49:59.075+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:38,045] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
policy-pap | [2024-02-19T09:49:59.078+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:38,046] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | {"source":"pap-89db414e-61b1-454e-b88b-59220abcdad7","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"27c77e08-86d8-4d6c-a377-69c796e75a58","timestampMs":1708336199051,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:38,048] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-02-19T09:49:59.078+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:38,066] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-02-19T09:49:59.089+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:38,066] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"27c77e08-86d8-4d6c-a377-69c796e75a58","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"18578a30-f6a6-427e-92fb-ae0f1664710f","timestampMs":1708336199080,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:38,067] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
policy-pap | [2024-02-19T09:49:59.090+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 27c77e08-86d8-4d6c-a377-69c796e75a58
policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:38,067] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-02-19T09:49:59.090+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:38,067] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"27c77e08-86d8-4d6c-a377-69c796e75a58","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"18578a30-f6a6-427e-92fb-ae0f1664710f","timestampMs":1708336199080,"name":"apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:38,074] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-02-19T09:49:59.091+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate stopping
policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:38,075] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-02-19T09:49:59.091+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate stopping enqueue
policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:38,075] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
policy-pap | [2024-02-19T09:49:59.091+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate stopping timer
policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:38,075] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-02-19T09:49:59.091+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=27c77e08-86d8-4d6c-a377-69c796e75a58, expireMs=1708336229066]
policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:07
kafka | [2024-02-19 09:49:38,075] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
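The TimerManager lines pair every outstanding request with an epoch-millisecond deadline: the PDP_UPDATE timer above was registered at 09:49:59.066 with expireMs=1708336229066, a 30-second window, and the same ids reappear when timers are cancelled or later discarded as expired. A quick stdlib check of that arithmetic (the registration instant below is the one implied by the log line's own timestamp):

```python
from datetime import datetime, timezone

def ms_to_utc(ms: int) -> datetime:
    """Convert an epoch-millisecond value like expireMs to a UTC datetime."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

registered = 1708336199066  # "update timer registered" wall clock, 09:49:59.066
expire_ms  = 1708336229066  # expireMs from the Timer in the log

print(ms_to_utc(expire_ms).isoformat())    # 2024-02-19T09:50:29.066000+00:00
print(expire_ms - registered, "ms window") # 30000 ms window
```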
policy-pap | [2024-02-19T09:49:59.091+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate stopping listener
policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
kafka | [2024-02-19 09:49:38,085] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-02-19T09:49:59.091+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate stopped
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
kafka | [2024-02-19 09:49:38,085] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-02-19T09:49:59.099+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c PdpUpdate successful
policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
kafka | [2024-02-19 09:49:38,085] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
policy-pap | [2024-02-19T09:49:59.099+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b5837cbd-4b55-4c0d-8cfd-2fce27f13c4c has no more requests
policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
kafka | [2024-02-19 09:49:38,085] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-02-19T09:50:07.518+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
policy-pap | [2024-02-19T09:50:07.526+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
kafka | [2024-02-19 09:49:38,086] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
policy-pap | [2024-02-19T09:50:07.923+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup
kafka | [2024-02-19 09:49:38,095] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
policy-pap | [2024-02-19T09:50:08.469+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup
kafka | [2024-02-19 09:49:38,095] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
policy-pap | [2024-02-19T09:50:08.470+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup
kafka | [2024-02-19 09:49:38,095] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
policy-pap | [2024-02-19T09:50:09.015+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
kafka | [2024-02-19 09:49:38,095] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
policy-pap | [2024-02-19T09:50:09.260+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0
kafka | [2024-02-19 09:49:38,095] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
policy-pap | [2024-02-19T09:50:09.348+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
kafka | [2024-02-19 09:49:38,104] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
policy-pap | [2024-02-19T09:50:09.348+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup
kafka | [2024-02-19 09:49:38,104] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
policy-pap | [2024-02-19T09:50:09.349+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup
kafka | [2024-02-19 09:49:38,104] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
policy-pap | [2024-02-19T09:50:09.364+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-02-19T09:50:09Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-02-19T09:50:09Z, user=policyadmin)]
kafka | [2024-02-19 09:49:38,104] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
policy-pap | [2024-02-19T09:50:10.137+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
kafka | [2024-02-19 09:49:38,104] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
policy-pap | [2024-02-19T09:50:10.139+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
kafka | [2024-02-19 09:49:38,111] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
policy-pap | [2024-02-19T09:50:10.139+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0
kafka | [2024-02-19 09:49:38,112] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:08
policy-pap | [2024-02-19T09:50:10.139+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
kafka | [2024-02-19 09:49:38,112] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:09
policy-pap | [2024-02-19T09:50:10.140+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
kafka | [2024-02-19 09:49:38,112] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:09
policy-pap | [2024-02-19T09:50:10.154+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-19T09:50:10Z, user=policyadmin)]
kafka | [2024-02-19 09:49:38,112] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
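The 50 __consumer_offsets-N partitions being brought to leader throughout this log are where Kafka keeps consumer-group offsets and group metadata; a group maps to exactly one of them via abs(groupId.hashCode) % 50, 50 being the offsets.topic.num.partitions default that matches this run. A sketch that reproduces Java's String.hashCode so the mapping can be checked from Python (the group id below is purely illustrative):

```python
def java_string_hashcode(s: str) -> int:
    """Java's String.hashCode: h = 31*h + c over the string's code units."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - (1 << 32) if h >= (1 << 31) else h  # back to signed 32-bit

def offsets_partition(group_id: str, num_partitions: int = 50) -> int:
    # Mirrors Kafka's partitionFor: Utils.abs(hashCode) % partition count,
    # where Utils.abs masks off the sign bit.
    return (java_string_hashcode(group_id) & 0x7FFFFFFF) % num_partitions

# Hypothetical group id for illustration; any consumer group maps the same way.
print(offsets_partition("policy-pap"))
```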
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:09
policy-pap | [2024-02-19T09:50:10.536+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup
kafka | [2024-02-19 09:49:38,122] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:09
policy-pap | [2024-02-19T09:50:10.537+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup
kafka | [2024-02-19 09:49:38,122] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1902240949040800u 1 2024-02-19 09:49:09
policy-pap | [2024-02-19T09:50:10.537+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
kafka | [2024-02-19 09:49:38,122] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1902240949040900u 1 2024-02-19 09:49:09
policy-pap | [2024-02-19T09:50:10.537+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
kafka | [2024-02-19 09:49:38,122] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1902240949040900u 1 2024-02-19 09:49:09
kafka | [2024-02-19 09:49:38,122] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-02-19T09:50:10.537+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1902240949040900u 1 2024-02-19 09:49:09
kafka | [2024-02-19 09:49:38,130] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-02-19T09:50:10.537+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1902240949040900u 1 2024-02-19 09:49:09
kafka | [2024-02-19 09:49:38,130] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-02-19T09:50:10.548+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-19T09:50:10Z, user=policyadmin)]
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 1902240949040900u 1 2024-02-19 09:49:09
kafka | [2024-02-19 09:49:38,130] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
policy-pap | [2024-02-19T09:50:28.919+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=102dc6a6-6858-4db0-95fe-a28908d45b01, expireMs=1708336228918]
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 1902240949040900u 1 2024-02-19 09:49:09
kafka | [2024-02-19 09:49:38,131] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-02-19T09:50:29.021+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=79d938ec-9d1c-4da5-9307-1df889903ca3, expireMs=1708336229020]
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1902240949040900u 1 2024-02-19 09:49:09
kafka | [2024-02-19 09:49:38,131] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-02-19T09:50:31.134+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1902240949040900u 1 2024-02-19 09:49:09
kafka | [2024-02-19 09:49:38,138] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-02-19T09:50:31.137+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1902240949040900u 1 2024-02-19 09:49:09
kafka | [2024-02-19 09:49:38,138] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-02-19T09:51:37.143+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1902240949040900u 1 2024-02-19 09:49:09
kafka | [2024-02-19 09:49:38,138] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 1902240949040900u 1 2024-02-19 09:49:09
kafka | [2024-02-19 09:49:38,138] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 1902240949040900u 1 2024-02-19 09:49:09
kafka | [2024-02-19 09:49:38,139] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1902240949040900u 1 2024-02-19 09:49:09
kafka | [2024-02-19 09:49:38,149] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1902240949041000u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,150] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1902240949041000u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,150] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 1902240949041000u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,150] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1902240949041000u 1 2024-02-19 09:49:10
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1902240949041000u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,150] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 1902240949041000u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,159] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-19 09:49:38,160] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 1902240949041000u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,160] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1902240949041000u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,160] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1902240949041000u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,160] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
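The PolicyAudit(...) entries flushed to the database above record one row per deploy or undeploy action, carrying the group, PDP type, policy name and version, action, timestamp, and user. A small mirror of that record, with field names copied from the toString output in the log (the Python class itself is illustrative, not the Java entity):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PolicyAudit:
    # Field names taken from the PolicyAudit(...) toString in the log;
    # this Python mirror is a sketch for illustration only.
    audit_id: Optional[int]
    pdp_group: str
    pdp_type: str
    policy: str     # "name version", e.g. "onap.restart.tca 1.0.0"
    action: str     # DEPLOYMENT or UNDEPLOYMENT
    timestamp: str  # ISO-8601, UTC
    user: str

undeploy = PolicyAudit(None, "testGroup", "pdpTypeC",
                       "operational.apex.decisionMaker 1.0.0",
                       "UNDEPLOYMENT", "2024-02-19T09:50:10Z", "policyadmin")
print(undeploy)
```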
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1902240949041100u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,167] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1902240949041200u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,168] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1902240949041200u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,168] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1902240949041200u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,168] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 1902240949041200u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,168] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1902240949041300u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,173] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1902240949041300u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,174] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1902240949041300u 1 2024-02-19 09:49:10
kafka | [2024-02-19 09:49:38,174] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
policy-db-migrator | policyadmin: OK @ 1300
kafka | [2024-02-19 09:49:38,174] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,174] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-19 09:49:38,181] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-19 09:49:38,182] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-19 09:49:38,182] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,182] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,182] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-19 09:49:38,189] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-19 09:49:38,189] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-19 09:49:38,190] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,190] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,190] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-19 09:49:38,197] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-19 09:49:38,197] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-19 09:49:38,197] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,197] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,197] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-19 09:49:38,204] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-19 09:49:38,205] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-19 09:49:38,205] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,205] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,205] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-19 09:49:38,213] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-19 09:49:38,214] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-19 09:49:38,214] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,214] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,214] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-19 09:49:38,226] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-19 09:49:38,227] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-19 09:49:38,227] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,227] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,227] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-19 09:49:38,242] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-19 09:49:38,243] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-19 09:49:38,243] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,243] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,243] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-19 09:49:38,250] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-19 09:49:38,250] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-19 09:49:38,250] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,251] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,251] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-19 09:49:38,260] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-19 09:49:38,261] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-19 09:49:38,261] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,261] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-19 09:49:38,261] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
(state.change.logger) kafka | [2024-02-19 09:49:38,271] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,272] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,272] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,272] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,272] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,280] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,281] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,281] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,281] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,281] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,288] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,289] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,289] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,289] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,289] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-19 09:49:38,296] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,297] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,297] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,297] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,297] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,305] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,305] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,305] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,305] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,305] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,311] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,312] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,312] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,312] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,312] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-19 09:49:38,319] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,320] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,320] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,320] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,320] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,358] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,359] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,359] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,359] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,359] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,374] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,374] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,374] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,374] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,374] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-19 09:49:38,384] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,384] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,385] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,385] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,385] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,392] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,393] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,393] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,393] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,394] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,401] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,402] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,402] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,402] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,402] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(veTrfW6FRamNLDmnsQwlbQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
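In contrast to the compacted __consumer_offsets partitions, the entry at 09:49:38,402 shows policy-pdp-pap-0 created with an empty property set ({}): the PAP message topic simply inherits the broker defaults (delete cleanup policy, default segment size) rather than carrying per-topic overrides. A hedged sketch of creating such a topic with no overrides, under the same assumed bootstrap address as above:

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Properties;

public class CreateDefaultTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        try (Admin admin = Admin.create(props)) {
            // No .configs(...) call, so the topic receives broker defaults --
            // matching the empty properties {} logged for policy-pdp-pap-0.
            admin.createTopics(List.of(new NewTopic("policy-pdp-pap", 1, (short) 1))).all().get();
        }
    }
}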
(state.change.logger) kafka | [2024-02-19 09:49:38,409] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,409] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,409] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,410] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,410] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,416] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,417] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,417] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,417] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,417] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,425] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,425] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,425] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,426] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,426] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-19 09:49:38,435] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,435] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,435] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,436] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,436] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,445] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,446] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,446] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,446] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,446] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,451] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,452] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,452] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,452] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,452] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-19 09:49:38,460] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,461] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,462] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,462] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,462] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,470] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,471] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,471] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,471] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,472] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,483] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,484] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,484] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,484] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,484] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-19 09:49:38,491] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,491] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,491] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,491] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,491] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,503] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,504] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,505] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,505] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,505] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,514] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,515] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,515] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,515] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,516] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-19 09:49:38,525] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,526] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,526] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,526] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,526] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,534] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,535] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,535] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,535] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,535] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,545] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,546] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,546] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,546] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,546] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-19 09:49:38,554] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-19 09:49:38,554] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-19 09:49:38,554] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,554] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-19 09:49:38,555] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(sGJ4pqvYRN2F_jW5d81sYw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 
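The INFO lines above ("Leader ... starts at leader epoch 0 from offset 0 ... ISR [1]") and the TRACE "Completed LeaderAndIsr request" entries that follow record broker 1, the only broker in this single-node CSIT setup, becoming leader of every partition with itself as the sole in-sync replica. The resulting leadership is observable from any client; a minimal Admin API sketch (topic name and address are assumptions):

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

import java.util.List;
import java.util.Properties;

public class ShowLeadership {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        try (Admin admin = Admin.create(props)) {
            TopicDescription desc = admin.describeTopics(List.of("policy-pdp-pap"))
                    .allTopicNames().get().get("policy-pdp-pap");
            for (TopicPartitionInfo p : desc.partitions()) {
                // On a single-broker setup like this run, leader is broker 1 and ISR is [1].
                System.out.printf("partition=%d leader=%s isr=%s%n",
                        p.partition(), p.leader(), p.isr());
            }
        }
    }
}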
(state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-02-19 09:49:38,561] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-02-19 09:49:38,574] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-19 09:49:38,576] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-19 09:49:38,579] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-19 09:49:38,579] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-19 09:49:38,579] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-19 09:49:38,579] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-19 09:49:38,579] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-19 09:49:38,579] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-19 09:49:38,579] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-19 09:49:38,579] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-19 09:49:38,579] INFO [GroupCoordinator 1]: 
Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-19 09:49:38,579] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-19 09:49:38,579] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-19 09:49:38,579] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-19 09:49:38,579] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-19 09:49:38,579] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-19 09:49:38,579] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-19 09:49:38,579] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-19 09:49:38,579] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-19 09:49:38,579] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-19 09:49:38,579] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-19 09:49:38,579] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-19 09:49:38,579] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-19 09:49:38,579] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-19 09:49:38,579] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-19 09:49:38,579] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-19 09:49:38,579] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-19 09:49:38,579] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-19 09:49:38,579] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-19 09:49:38,579] INFO [GroupMetadataManager brokerId=1] Scheduling loading of 
offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-19 09:49:38,579] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-19 09:49:38,579] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
[... the same "Elected as the group coordinator" / "Scheduling loading of offsets and group metadata" pair repeats at 09:49:38,579-580 for the remaining __consumer_offsets partitions 46, 1, 16, 2, 25, 40, 47, 17, 32, 37, 7, 22, 29, 44, 14, 23, 38, 8, 45, 15, 30, 0, 35, 5, 20, 27, 42, 12, 21, 36, 6, 43, 13 and 28, all in epoch 0 ...]
kafka | [2024-02-19 09:49:38,582] INFO [Broker id=1] Finished LeaderAndIsr request in 666ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2024-02-19 09:49:38,588] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=sGJ4pqvYRN2F_jW5d81sYw, partitionErrors=[... one LeaderAndIsrPartitionError(topicName='', partitionIndex=N, errorCode=0) entry for each of the 50 __consumer_offsets partitions ...]), LeaderAndIsrTopicError(topicId=veTrfW6FRamNLDmnsQwlbQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-02-19 09:49:38,590] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 12 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
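[editor's note: the coordinator election above can be cross-checked from inside the broker container. A minimal sketch, assuming the kafka-topics / kafka-consumer-groups wrappers that the Confluent images put on the PATH (plain Apache Kafka distributions ship .sh-suffixed equivalents) and the kafka:9092 listener used throughout this log:]

    # List the 50 partitions of the internal offsets topic and their leaders
    docker exec kafka kafka-topics --bootstrap-server kafka:9092 --describe --topic __consumer_offsets
    # List the consumer groups this broker now coordinates
    docker exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 --list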
kafka | [2024-02-19 09:49:38,592-609] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18, -41, -10, -33, -48, -19, -34, -4, -11, -26, -49, -39, -9, -24, -31, -46, -1, -16, -2, -25, -40, -47, -17, -32, -37, -7, -22, -29, -44, -14, -23, -38, -8, -45, -15, -30, -0, -35, -5, -20, -27, -42, -12, -21 and -36, each in 13 to 29 milliseconds for epoch 0, with essentially all of that time spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-19 09:49:38,600] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-19 09:49:38,601-609] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0..49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for each of the 50 __consumer_offsets partitions in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
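[editor's note: the cached metadata above (leader=1, isr=[1], replicas=[1]) can be verified against the live topic state; a minimal sketch with the same assumed CLI wrapper and listener as before:]

    # Confirm leader/ISR for the PAP notification topic match the cached
    # UpdateMetadata state logged above
    docker exec kafka kafka-topics --bootstrap-server kafka:9092 --describe --topic policy-pdp-pap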
kafka | [2024-02-19 09:49:38,609] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 29 milliseconds for epoch 0, of which 29 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-19 09:49:38,610] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 30 milliseconds for epoch 0, of which 29 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-19 09:49:38,610] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 30 milliseconds for epoch 0, of which 30 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-19 09:49:38,610] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-19 09:49:38,610] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 30 milliseconds for epoch 0, of which 30 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-19 09:49:38,611] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-02-19 09:49:38,688] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-a1f25c60-2000-42dd-8d44-739b6374e145 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-19 09:49:38,707] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 5cceb518-7b72-41da-b42c-3c8775105be3 in Empty state. Created a new member id consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3-f725ff30-8c0a-45d9-be77-4e795839c775 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-19 09:49:38,718] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-a1f25c60-2000-42dd-8d44-739b6374e145 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-19 09:49:38,718] INFO [GroupCoordinator 1]: Preparing to rebalance group 5cceb518-7b72-41da-b42c-3c8775105be3 in state PreparingRebalance with old generation 0 (__consumer_offsets-42) (reason: Adding new member consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3-f725ff30-8c0a-45d9-be77-4e795839c775 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-19 09:49:39,148] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 806dde56-8d7b-4023-b37b-d9545bfe5732 in Empty state. Created a new member id consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2-5e8229a3-40aa-439c-bc1c-c983f4eb0b77 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-19 09:49:39,155] INFO [GroupCoordinator 1]: Preparing to rebalance group 806dde56-8d7b-4023-b37b-d9545bfe5732 in state PreparingRebalance with old generation 0 (__consumer_offsets-21) (reason: Adding new member consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2-5e8229a3-40aa-439c-bc1c-c983f4eb0b77 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-19 09:49:41,732] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-19 09:49:41,737] INFO [GroupCoordinator 1]: Stabilized group 5cceb518-7b72-41da-b42c-3c8775105be3 generation 1 (__consumer_offsets-42) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-19 09:49:41,759] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-a1f25c60-2000-42dd-8d44-739b6374e145 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-19 09:49:41,759] INFO [GroupCoordinator 1]: Assignment received from leader consumer-5cceb518-7b72-41da-b42c-3c8775105be3-3-f725ff30-8c0a-45d9-be77-4e795839c775 for group 5cceb518-7b72-41da-b42c-3c8775105be3 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-19 09:49:42,156] INFO [GroupCoordinator 1]: Stabilized group 806dde56-8d7b-4023-b37b-d9545bfe5732 generation 1 (__consumer_offsets-21) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-19 09:49:42,174] INFO [GroupCoordinator 1]: Assignment received from leader consumer-806dde56-8d7b-4023-b37b-d9545bfe5732-2-5e8229a3-40aa-439c-bc1c-c983f4eb0b77 for group 806dde56-8d7b-4023-b37b-d9545bfe5732 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping kafka ...
Stopping grafana ...
Stopping policy-api ...
Stopping compose_zookeeper_1 ...
Stopping mariadb ...
Stopping simulator ...
Stopping prometheus ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping simulator ... done
Stopping policy-pap ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping compose_zookeeper_1 ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing kafka ...
Removing grafana ...
Removing policy-api ...
Removing policy-db-migrator ...
Removing compose_zookeeper_1 ...
Removing mariadb ...
Removing simulator ...
Removing prometheus ...
Removing simulator ... done
Removing policy-db-migrator ... done
Removing mariadb ... done
Removing prometheus ... done
Removing policy-apex-pdp ... done
Removing grafana ... done
Removing policy-api ... done
Removing kafka ... done
Removing compose_zookeeper_1 ... done
Removing policy-pap ... done
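[editor's note: the teardown step above can be reproduced from the compose directory; a minimal sketch, grounded in the docker-compose invocation shown in the log (the name filters below are illustrative, not from the job scripts):]

    # Tear down the CSIT stack exactly as the job does, then verify
    # no policy containers or the compose_default network remain
    docker-compose down -v --remove-orphans
    docker ps --filter name=policy- --format '{{.Names}}'
    docker network ls --filter name=compose_default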
Removing network compose_default
++ cd /w/workspace/policy-pap-master-project-csit-pap
+ load_set
+ _setopts=hxB
++ tr : ' '
++ echo braceexpand:hashall:interactive-comments:xtrace
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ [[ -n /tmp/tmp.l4Kz55UmBK ]]
+ rsync -av /tmp/tmp.l4Kz55UmBK/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 911,287 bytes received 95 bytes 1,822,764.00 bytes/sec
total size is 910,741 speedup is 1.00
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+ exit 0
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2135 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6224901221944701805.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4102950680482607825.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14313743289246040213.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-pit5 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-pit5/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16628844314642573179.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config9858982419178937314tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
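[editor's note: the package-listing.sh step above boils down to a snapshot-and-diff of installed packages; a minimal sketch of the same idea, using the paths from the trace:]

    # Snapshot installed packages at end of job and diff against the
    # start-of-job snapshot (diff exits non-zero when lists differ)
    dpkg -l | grep '^ii' > /tmp/packages_end.txt
    diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true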
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11364149369712374595.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8048815866323840826.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-pit5 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-pit5/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16374201217374136762.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12506582733151318942.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-pit5 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-pit5/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins11145712394181388516.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-pit5 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
python-openstackclient 6.5.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-pit5/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1581
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
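[editor's note: the two openstacksdk conflicts reported above can be surfaced without re-running the job; a minimal sketch against the shared venv from the log:]

    # Report all unsatisfied dependency constraints in the job venv
    /tmp/venv-pit5/bin/pip check
    # Show which openstacksdk version the conflicting packages actually see
    /tmp/venv-pit5/bin/pip show openstacksdk lftools python-openstackclient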
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-6650 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem   Size  Used Avail Use% Mounted on
udev          16G     0   16G   0% /dev
tmpfs        3.2G  708K  3.2G   1% /run
/dev/vda1    155G   14G  142G   9% /
tmpfs         16G     0   16G   0% /dev/shm
tmpfs        5.0M     0  5.0M   0% /run/lock
tmpfs         16G     0   16G   0% /sys/fs/cgroup
/dev/vda15   105M  4.4M  100M   5% /boot/efi
tmpfs        3.2G     0  3.2G   0% /run/user/1001

---> free -m:
        total    used    free  shared  buff/cache  available
Mem:    32167     835   25339       0        5992      30876
Swap:    1023       0    1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:a3:4a:96 brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.216/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85934sec preferred_lft 85934sec
    inet6 fe80::f816:3eff:fea3:4a96/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:93:8d:ad:88 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-6650)  02/19/24  _x86_64_  (8 CPU)

09:45:13  LINUX RESTART  (8 CPU)

09:46:02        tps      rtps      wtps   bread/s   bwrtn/s
09:47:01     101.98     14.51     87.48   1010.68  28369.56
09:48:01     143.16     23.23    119.93   2801.13  35344.78
09:49:01     301.05      5.70    295.35    511.75 140388.27
09:50:01     238.67      6.95    231.72    300.60  28167.48
09:51:01      17.15      0.00     17.15      0.00  20659.97
09:52:01      58.41      0.00     58.41      0.00  22393.37
Average:     143.52      8.38    135.14    770.01  45935.53

09:46:02  kbmemfree  kbavail  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
09:47:01   30085240 31713060    2853980      8.66      72224   1864056   1459204     4.29    860440  1696732   124844
09:48:01   28157696 31667396    4781524     14.52     108968   3645040   1567460     4.61    992708  3383744  1525484
09:49:01   25646152 31347956    7293068     22.14     142104   5690492   4508900    13.27   1366244  5377300      640
09:50:01   23663260 29483972    9275960     28.16     155384   5780780   8929220    26.27   3382536  5294788      328
09:51:01   23709628 29530912    9229592     28.02     155612   5780988   8870924    26.10   3337692  5292560      240
09:52:01   25252112 31091120    7687108     23.34     156472   5807884   2424716     7.13   1850876  5287848       68
Average:   26085681 30805736    6853539     20.81     131794   4761540   4626737    13.61   1965083  4388829   275267
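[editor's note: comparable I/O and memory reports can be generated on a similar sysstat-enabled host; a minimal sketch, with interval/count chosen to match the per-minute rows above:]

    # I/O transfer rates, then memory utilization: 5 samples, 60s apart
    sar -b 60 5
    sar -r 60 5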
09:46:02        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
09:47:01           lo      1.49      1.49      0.16      0.16      0.00      0.00      0.00      0.00
09:47:01         ens3     62.80     40.33    950.23      9.50      0.00      0.00      0.00      0.00
09:47:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
09:48:01           lo      7.87      7.87      0.73      0.73      0.00      0.00      0.00      0.00
09:48:01 br-f44fd4dc2201   0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
09:48:01         ens3    352.27    213.33  10492.61     19.95      0.00      0.00      0.00      0.00
09:48:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
09:49:01  vethbfdb283      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
09:49:01           lo      5.47      5.47      0.56      0.56      0.00      0.00      0.00      0.00
09:49:01  vethb7394f1      0.00      0.05      0.00      0.00      0.00      0.00      0.00      0.00
09:49:01 br-f44fd4dc2201   0.03      0.12      0.00      0.01      0.00      0.00      0.00      0.00
09:50:01           lo      2.78      2.78      2.44      2.44      0.00      0.00      0.00      0.00
09:50:01  vethb7394f1      0.00      0.35      0.00      0.02      0.00      0.00      0.00      0.00
09:50:01 br-f44fd4dc2201   1.45      1.30      0.90      1.80      0.00      0.00      0.00      0.00
09:50:01         ens3   1508.28    889.97  32681.27    118.49      0.00      0.00      0.00      0.00
09:51:01           lo      6.03      6.03      1.37      1.37      0.00      0.00      0.00      0.00
09:51:01  vethb7394f1      0.00      0.02      0.00      0.00      0.00      0.00      0.00      0.00
09:51:01 br-f44fd4dc2201   1.60      1.80      1.00      0.24      0.00      0.00      0.00      0.00
09:51:01         ens3      2.67      2.23      0.54      0.64      0.00      0.00      0.00      0.00
09:52:01           lo      6.97      6.97      0.56      0.56      0.00      0.00      0.00      0.00
09:52:01 br-f44fd4dc2201   1.23      1.47      0.10      0.14      0.00      0.00      0.00      0.00
09:52:01         ens3     19.21     16.61      7.24     17.26      0.00      0.00      0.00      0.00
09:52:01  veth500f700     54.01     48.38     20.46     40.50      0.00      0.00      0.00      0.00
Average:           lo      5.11      5.11      0.97      0.97      0.00      0.00      0.00      0.00
Average: br-f44fd4dc2201   0.72      0.78      0.33      0.37      0.00      0.00      0.00      0.00
Average:         ens3    197.52    115.12   5316.80     13.35      0.00      0.00      0.00      0.00
Average:  veth500f700      9.03      8.08      3.42      6.77      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-6650)   02/19/24   _x86_64_   (8 CPU)

09:45:13     LINUX RESTART   (8 CPU)

09:46:02        CPU     %user     %nice   %system   %iowait    %steal     %idle
09:47:01        all      9.80      0.00      0.67      3.21      0.03     86.29
09:47:01          0      9.78      0.00      0.73      0.08      0.03     89.37
09:47:01          1      5.02      0.00      0.39      5.04      0.00     89.55
09:47:01          2      2.07      0.00      0.19      0.05      0.02     97.68
09:47:01          3      0.42      0.00      0.22      0.59      0.00     98.76
09:47:01          4      1.44      0.00      0.37      0.27      0.02     97.90
09:47:01          5     24.91      0.00      1.41      9.03      0.07     64.58
09:47:01          6     26.09      0.00      1.33      2.13      0.07     70.38
09:47:01          7      8.72      0.00      0.73      8.50      0.03     82.02
09:48:01        all     11.83      0.00      2.49      2.37      0.05     83.27
09:48:01          0      8.07      0.00      2.61      2.86      0.07     86.38
09:48:01          1      6.43      0.00      2.58      1.31      0.03     89.65
09:48:01          2      5.62      0.00      2.26      0.35      0.03     91.74
09:48:01          3     13.27      0.00      2.42      0.71      0.05     83.55
09:48:01          4      8.78      0.00      2.35      0.72      0.03     88.12
09:48:01          5     20.90      0.00      2.19     11.94      0.05     64.93
09:48:01          6     19.68      0.00      3.13      0.61      0.07     76.51
09:48:01          7     11.92      0.00      2.38      0.47      0.03     85.19
09:49:01        all     10.27      0.00      4.96      8.13      0.06     76.58
09:49:01          0      9.19      0.00      5.30     21.68      0.07     63.76
09:49:01          1      8.90      0.00      6.16     18.08      0.07     66.79
09:49:01          2     10.64      0.00      5.08      1.86      0.05     82.36
09:49:01          3     12.86      0.00      4.04      3.38      0.07     79.66
09:49:01          4      9.30      0.00      5.16      1.29      0.05     84.20
09:49:01          5     10.86      0.00      4.97     15.95      0.07     68.16
09:49:01          6     10.70      0.00      4.16      0.19      0.07     84.89
09:49:01          7      9.69      0.00      4.77      2.73      0.08     82.73
09:50:01        all     29.61      0.00      3.63      1.81      0.09     64.86
09:50:01          0     26.77      0.00      3.34      0.75      0.08     69.05
09:50:01          1     33.90      0.00      3.88      1.14      0.10     60.97
09:50:01          2     27.09      0.00      3.43      1.41      0.08     67.98
09:50:01          3     28.53      0.00      3.66      6.32      0.08     61.40
09:50:01          4     31.29      0.00      3.86      0.20      0.08     64.56
09:50:01          5     25.62      0.00      3.37      2.58      0.08     68.34
09:50:01          6     33.05      0.00      3.74      0.30      0.08     62.83
09:50:01          7     30.70      0.00      3.71      1.79      0.08     63.72
09:51:01        all      4.17      0.00      0.43      0.97      0.04     94.39
09:51:01          0      2.99      0.00      0.48      0.05      0.05     96.43
09:51:01          1      3.82      0.00      0.38      0.05      0.05     95.69
09:51:01          2      5.32      0.00      0.50      0.02      0.03     94.13
09:51:01          3      4.61      0.00      0.37      7.51      0.03     87.48
09:51:01          4      4.34      0.00      0.47      0.00      0.03     95.15
09:51:01          5      4.29      0.00      0.48      0.00      0.05     95.17
09:51:01          6      4.21      0.00      0.47      0.05      0.05     95.22
09:51:01          7      3.73      0.00      0.28      0.05      0.03     95.90
09:52:01        all      1.57      0.00      0.56      1.23      0.04     96.61
09:52:01          0      2.14      0.00      0.62      0.17      0.07     97.00
09:52:01          1      1.25      0.00      0.60      0.12      0.03     98.00
09:52:01          2      1.40      0.00      0.65      0.02      0.03     97.90
09:52:01          3      1.84      0.00      0.65      8.51      0.05     88.94
09:52:01          4      1.34      0.00      0.47      0.05      0.05     98.09
09:52:01          5      1.44      0.00      0.52      0.33      0.03     97.68
09:52:01          6      1.32      0.00      0.50      0.58      0.02     97.58
09:52:01          7      1.85      0.00      0.50      0.05      0.03     97.57
Average:        all     11.20      0.00      2.12      2.94      0.05     83.69
Average:          0      9.83      0.00      2.17      4.22      0.06     83.72
Average:          1      9.88      0.00      2.32      4.25      0.05     83.50
Average:          2      8.69      0.00      2.02      0.62      0.04     88.63
Average:          3     10.26      0.00      1.89      4.52      0.05     83.29
Average:          4      9.42      0.00      2.11      0.42      0.04     88.00
Average:          5     14.64      0.00      2.15      6.61      0.06     76.54
Average:          6     15.79      0.00      2.22      0.64      0.06     81.29
Average:          7     11.11      0.00      2.06      2.25      0.05     84.53
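The sar reports above cover one-minute samples from 09:46 to 09:52; since sysstat also records them to its daily data file, the same windows can be re-queried after the job. A sketch, assuming the default Ubuntu layout (/var/log/sysstat/saDD, here sa19 for 02/19/24):

    # Hypothetical replay of the reports above from the recorded daily file.
    sar -b -r -n DEV -f /var/log/sysstat/sa19 -s 09:46:00 -e 09:53:00
    sar -P ALL       -f /var/log/sysstat/sa19 -s 09:46:00 -e 09:53:00

    # Pull the average ens3 throughput (rxkB/s, txkB/s) out of the DEV report:
    sar -n DEV -f /var/log/sysstat/sa19 | \
        awk '$1 == "Average:" && $2 == "ens3" { print "rxkB/s=" $5, "txkB/s=" $6 }'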