Started by timer Running as SYSTEM [EnvInject] - Loading node environment variables. Building remotely on prd-ubuntu1804-docker-8c-8g-7144 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap [ssh-agent] Looking for ssh-agent implementation... [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) $ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-WEryV8WBBco4/agent.2156 SSH_AGENT_PID=2158 [ssh-agent] Started. Running ssh-add (command line suppressed) Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_5914452624802915947.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_5914452624802915947.key) [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) The recommended git tool is: NONE using credential onap-jenkins-ssh Wiping out workspace first. Cloning the remote Git repository Cloning repository git://cloud.onap.org/mirror/policy/docker.git > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git > git --version # timeout=10 > git --version # 'git version 2.17.1' using GIT_SSH to set credentials Gerrit user Verifying host key using manually-configured host key entries > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 Avoid second fetch > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 Checking out Revision dd836dc2d2bd379fba19b395c912d32f1bc7ee38 (refs/remotes/origin/master) > git config core.sparsecheckout # timeout=10 > git checkout -f dd836dc2d2bd379fba19b395c912d32f1bc7ee38 # timeout=30 Commit message: "Update snapshot and/or references of policy/docker to latest snapshots" > git rev-list --no-walk dd836dc2d2bd379fba19b395c912d32f1bc7ee38 # timeout=10 provisioning config files... 
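For reference, the checkout performed above maps to a plain git sequence; a minimal sketch using the mirror URL and revision from this log (the local directory name is arbitrary):

  # Detached checkout of the exact revision built in this run
  git init policy-docker && cd policy-docker
  git fetch git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
  git checkout -f dd836dc2d2bd379fba19b395c912d32f1bc7ee38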
copy managed file [npmrc] to file:/home/jenkins/.npmrc copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6314904179455017065.sh ---> python-tools-install.sh Setup pyenv: * system (set by /opt/pyenv/version) * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.6 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-G64c lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-G64c/bin to PATH Generating Requirements File Python 3.10.6 pip 24.0 from /tmp/venv-G64c/lib/python3.10/site-packages/pip (python 3.10) appdirs==1.4.4 argcomplete==3.2.2 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.34.46 botocore==1.34.46 bs4==0.0.2 cachetools==5.3.2 certifi==2024.2.2 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 cliff==4.5.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==2.5.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.6.1 docker==4.2.2 dogpile.cache==1.3.1 email-validator==2.1.0.post1 filelock==3.13.1 future==0.18.3 gitdb==4.0.11 GitPython==3.1.42 google-auth==2.28.0 httplib2==0.22.0 identify==2.5.35 idna==3.6 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.21.1 jsonschema-specifications==2023.12.1 keystoneauth1==5.5.0 kubernetes==29.0.0 lftools==0.37.8 lxml==5.1.0 MarkupSafe==2.1.5 msgpack==1.0.7 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.2.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==0.62.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==3.0.0 oslo.config==9.3.0 oslo.context==5.3.0 oslo.i18n==6.2.0 oslo.log==5.4.0 oslo.serialization==5.3.0 oslo.utils==7.0.0 packaging==23.2 pbr==6.0.0 platformdirs==4.2.0 prettytable==3.10.0 pyasn1==0.5.1 pyasn1-modules==0.3.0 pycparser==2.21 pygerrit2==2.0.15 PyGithub==2.2.0 pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2 pyrsistent==0.20.0 python-cinderclient==9.4.0 python-dateutil==2.8.2 python-heatclient==3.4.0 python-jenkins==1.8.2 python-keystoneclient==5.3.0 python-magnumclient==4.3.0 python-novaclient==18.4.0 python-openstackclient==6.0.1 python-swiftclient==4.4.0 pytz==2024.1 PyYAML==6.0.1 referencing==0.33.0 requests==2.31.0 requests-oauthlib==1.3.1 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.18.0 rsa==4.9 ruamel.yaml==0.18.6 ruamel.yaml.clib==0.2.8 s3transfer==0.10.0 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5 stevedore==5.1.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.3 tqdm==4.66.2 typing_extensions==4.9.0 tzdata==2024.1 urllib3==1.26.18 virtualenv==20.25.0 wcwidth==0.2.13 websocket-client==1.7.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0 yq==3.2.3 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SET_JDK_VERSION=openjdk17 GIT_URL="git://cloud.onap.org/mirror" [EnvInject] - Variables injected successfully. 
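The lf-activate-venv() step above amounts to creating a disposable python3 venv, installing lftools into it, and freezing the result; a minimal equivalent (the venv path is illustrative, this job used /tmp/venv-G64c):

  python3 -m venv /tmp/venv-example
  . /tmp/venv-example/bin/activate
  pip install --upgrade pip
  pip install lftools      # pulls in the dependency list frozen above
  pip freeze               # the "Generating Requirements File" output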
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins15404237512665771173.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "17.0.4" 2022-07-19 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins9556120024277205967.sh + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap + set +u + save_set + RUN_CSIT_SAVE_SET=ehxB + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace + '[' 1 -eq 0 ']' + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + export ROBOT_VARIABLES= + ROBOT_VARIABLES= + export PROJECT=pap + PROJECT=pap + cd /w/workspace/policy-pap-master-project-csit-pap + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ mktemp -d ++ ROBOT_VENV=/tmp/tmp.c6cJnFSE9h ++ echo ROBOT_VENV=/tmp/tmp.c6cJnFSE9h +++ python3 --version ++ echo 'Python version is: Python 3.6.9' Python version is: Python 3.6.9 ++ python3 -m venv --clear /tmp/tmp.c6cJnFSE9h ++ source /tmp/tmp.c6cJnFSE9h/bin/activate +++ deactivate nondestructive +++ '[' -n '' ']' +++ '[' -n '' ']' +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r +++ '[' -n '' ']' +++ unset VIRTUAL_ENV +++ '[' '!' 
nondestructive = nondestructive ']' +++ VIRTUAL_ENV=/tmp/tmp.c6cJnFSE9h +++ export VIRTUAL_ENV +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ PATH=/tmp/tmp.c6cJnFSE9h/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ export PATH +++ '[' -n '' ']' +++ '[' -z '' ']' +++ _OLD_VIRTUAL_PS1= +++ '[' 'x(tmp.c6cJnFSE9h) ' '!=' x ']' +++ PS1='(tmp.c6cJnFSE9h) ' +++ export PS1 +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r ++ set -exu ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' ++ echo 'Installing Python Requirements' Installing Python Requirements ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 lxml==5.1.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 WebTest==3.0.0 zipp==3.6.0 ++ mkdir -p /tmp/tmp.c6cJnFSE9h/src/onap ++ rm -rf /tmp/tmp.c6cJnFSE9h/src/onap/testsuite ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre ++ echo 'Installing python confluent-kafka library' Installing python confluent-kafka library ++ python3 -m pip install -qq confluent-kafka ++ echo 'Uninstall docker-py and reinstall docker.' Uninstall docker-py and reinstall docker. 
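The "Uninstall docker-py and reinstall docker" step presumably guards against the legacy docker-py distribution shadowing the newer docker SDK inside the Robot venv; a quick, purely illustrative check of which package ends up active:

  python3 -c 'import docker; print(docker.__version__)'   # expect the docker SDK version (5.0.3 in this run), not docker-py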
++ python3 -m pip uninstall -y -qq docker ++ python3 -m pip install -U -qq docker ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1 deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 future==0.18.3 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.1.0 MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0 ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 PyYAML==6.0.1 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0 ++ uname ++ grep -q Linux ++ sudo apt-get -y -qq install libxml2-utils + load_set + _setopts=ehuxB ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o nounset + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo ehuxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +e + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +u + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + source_safely /tmp/tmp.c6cJnFSE9h/bin/activate + '[' -z /tmp/tmp.c6cJnFSE9h/bin/activate ']' + relax_set + set +e + set +o pipefail + . /tmp/tmp.c6cJnFSE9h/bin/activate ++ deactivate nondestructive ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ unset _OLD_VIRTUAL_PATH ++ '[' -n '' ']' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r ++ '[' -n '' ']' ++ unset VIRTUAL_ENV ++ '[' '!' 
nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/tmp/tmp.c6cJnFSE9h ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ PATH=/tmp/tmp.c6cJnFSE9h/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ '[' -n '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1='(tmp.c6cJnFSE9h) ' ++ '[' 'x(tmp.c6cJnFSE9h) ' '!=' x ']' ++ PS1='(tmp.c6cJnFSE9h) (tmp.c6cJnFSE9h) ' ++ export PS1 ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + export TEST_OPTIONS= + TEST_OPTIONS= ++ mktemp -d + WORKDIR=/tmp/tmp.pn8iy6Kpoj + cd /tmp/tmp.pn8iy6Kpoj + docker login -u docker -p docker nexus3.onap.org:10001 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview +++ GERRIT_BRANCH=master +++ echo GERRIT_BRANCH=master GERRIT_BRANCH=master +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 
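The docker login warning emitted above is avoidable: as the message itself suggests, the password can be fed on stdin instead of via -p. An equivalent of the command the script runs:

  # Avoids "Using --password via the CLI is insecure"
  echo docker | docker login -u docker --password-stdin nexus3.onap.org:10001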
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose +++ grafana=false +++ gui=false +++ [[ 2 -gt 0 ]] +++ key=apex-pdp +++ case $key in +++ echo apex-pdp apex-pdp +++ component=apex-pdp +++ shift +++ [[ 1 -gt 0 ]] +++ key=--grafana +++ case $key in +++ grafana=true +++ shift +++ [[ 0 -gt 0 ]] +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose +++ echo 'Configuring docker compose...' Configuring docker compose... +++ source export-ports.sh +++ source get-versions.sh +++ '[' -z pap ']' +++ '[' -n apex-pdp ']' +++ '[' apex-pdp == logs ']' +++ '[' true = true ']' +++ echo 'Starting apex-pdp application with Grafana' Starting apex-pdp application with Grafana +++ docker-compose up -d apex-pdp grafana Creating network "compose_default" with the default driver Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... latest: Pulling from prom/prometheus Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... latest: Pulling from grafana/grafana Digest: sha256:8640e5038e83ca4554ed56b9d76375158bcd51580238c6f5d8adaf3f20dd5379 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 10.10.2: Pulling from mariadb Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator Digest: sha256:296577cad1791ddae720c19e5a96c4f6dfea1eb6f9a0aba78ec9d1ac886fa3a4 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT Pulling zookeeper (confluentinc/cp-zookeeper:latest)... latest: Pulling from confluentinc/cp-zookeeper Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest Pulling kafka (confluentinc/cp-kafka:latest)... latest: Pulling from confluentinc/cp-kafka Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047 Status: Downloaded newer image for confluentinc/cp-kafka:latest Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 
3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator Digest: sha256:d2876ccda69cc445de980a3d4765cb553f81049d67cc6056cfa9e5429597baa6 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-api Digest: sha256:78a40fb24ed4d3cee4ce259c77b5dd4ea7c5808a9213d88dd227e26e4f302016 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-pap Digest: sha256:1999687a3a7904992c4686afb8b854bbc7221d3c1a80889c66ccaff2973b9dd9 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp Digest: sha256:8670bcaff746ebc196cef9125561eb167e1e65c7e2f8d374c0d8834d57564da4 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT Creating simulator ... Creating mariadb ... Creating compose_zookeeper_1 ... Creating prometheus ... Creating simulator ... done Creating mariadb ... done Creating policy-db-migrator ... Creating policy-db-migrator ... done Creating policy-api ... Creating prometheus ... done Creating grafana ... Creating grafana ... done Creating compose_zookeeper_1 ... done Creating kafka ... Creating policy-api ... done Creating kafka ... done Creating policy-pap ... Creating policy-pap ... done Creating policy-apex-pdp ... Creating policy-apex-pdp ... done +++ echo 'Prometheus server: http://localhost:30259' Prometheus server: http://localhost:30259 +++ echo 'Grafana server: http://localhost:30269' Grafana server: http://localhost:30269 +++ cd /w/workspace/policy-pap-master-project-csit-pap ++ sleep 10 ++ unset http_proxy https_proxy ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 Waiting for REST to come up on localhost port 30003... 
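wait_for_rest.sh is not reproduced in this log; its observable effect is polling until the PAP REST port answers. A rough sketch of such a loop (purely illustrative, the real script may differ):

  host=localhost; port=30003
  until nc -z "$host" "$port"; do
    echo "Waiting for REST to come up on $host port $port..."
    sleep 5
  done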
NAMES STATUS policy-apex-pdp Up 10 seconds policy-pap Up 11 seconds kafka Up 11 seconds grafana Up 14 seconds policy-api Up 12 seconds compose_zookeeper_1 Up 13 seconds prometheus Up 15 seconds mariadb Up 17 seconds simulator Up 18 seconds NAMES STATUS policy-apex-pdp Up 15 seconds policy-pap Up 16 seconds kafka Up 17 seconds grafana Up 19 seconds policy-api Up 17 seconds compose_zookeeper_1 Up 18 seconds prometheus Up 20 seconds mariadb Up 22 seconds simulator Up 23 seconds NAMES STATUS policy-apex-pdp Up 20 seconds policy-pap Up 21 seconds kafka Up 22 seconds grafana Up 24 seconds policy-api Up 22 seconds compose_zookeeper_1 Up 23 seconds prometheus Up 25 seconds mariadb Up 27 seconds simulator Up 28 seconds NAMES STATUS policy-apex-pdp Up 25 seconds policy-pap Up 26 seconds kafka Up 27 seconds grafana Up 29 seconds policy-api Up 27 seconds compose_zookeeper_1 Up 28 seconds prometheus Up 30 seconds mariadb Up 32 seconds simulator Up 33 seconds NAMES STATUS policy-apex-pdp Up 30 seconds policy-pap Up 31 seconds kafka Up 32 seconds grafana Up 34 seconds policy-api Up 32 seconds compose_zookeeper_1 Up 33 seconds prometheus Up 35 seconds mariadb Up 37 seconds simulator Up 38 seconds ++ export 'SUITES=pap-test.robot pap-slas.robot' ++ SUITES='pap-test.robot pap-slas.robot' ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ sed 's/./& /g' ++ echo hxB + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + docker_stats + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt ++ uname -s + '[' Linux == Darwin ']' + sh -c 'top -bn1 | head -3' top - 23:15:18 up 5 min, 0 users, load average: 2.72, 1.30, 0.53 Tasks: 208 total, 1 running, 131 sleeping, 0 stopped, 0 zombie %Cpu(s): 12.1 us, 2.4 sy, 0.0 ni, 80.5 id, 5.0 wa, 0.0 hi, 0.0 si, 0.1 st + echo + sh -c 'free -h' total used free shared buff/cache available Mem: 31G 2.7G 22G 1.3M 6.0G 28G Swap: 1.0G 0B 1.0G + echo + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-apex-pdp Up 30 seconds policy-pap Up 31 seconds kafka Up 32 seconds grafana Up 34 seconds policy-api Up 32 seconds compose_zookeeper_1 Up 33 seconds prometheus Up 35 seconds mariadb Up 37 seconds simulator Up 38 seconds + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 24f93304139d policy-apex-pdp 1.97% 200.4MiB / 31.41GiB 0.62% 7.15kB / 6.77kB 0B / 0B 48 3c4c7215af5e policy-pap 3.14% 485.2MiB / 31.41GiB 1.51% 28.1kB / 29.8kB 0B / 153MB 61 22286465a78e kafka 48.63% 393.3MiB / 31.41GiB 1.22% 70.8kB / 73.3kB 0B / 500kB 83 7921f99bd8a3 grafana 1.40% 59.24MiB / 31.41GiB 0.18% 18.8kB / 3.31kB 0B / 24MB 18 efc297b62d50 policy-api 0.11% 592.3MiB / 31.41GiB 1.84% 999kB / 710kB 0B / 0B 52 68e97d183625 compose_zookeeper_1 0.06% 99.59MiB / 31.41GiB 0.31% 55.9kB / 
50.2kB 0B / 373kB 60 4c7517fc4ecd prometheus 0.00% 18.29MiB / 31.41GiB 0.06% 1.33kB / 158B 0B / 0B 13 cbd110d47717 mariadb 0.02% 101.8MiB / 31.41GiB 0.32% 994kB / 1.19MB 11.1MB / 67.9MB 37 394007c1fca1 simulator 0.07% 119.9MiB / 31.41GiB 0.37% 1.63kB / 0B 102kB / 0B 76 + echo + cd /tmp/tmp.pn8iy6Kpoj + echo 'Reading the testplan:' Reading the testplan: + echo 'pap-test.robot pap-slas.robot' + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' + cat testplan.txt /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ++ xargs + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ... + relax_set + set +e + set +o pipefail + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ============================================================================== pap ============================================================================== pap.Pap-Test ============================================================================== LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | ------------------------------------------------------------------------------ LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | ------------------------------------------------------------------------------ LoadNodeTemplates :: Create node templates in database using speci... 
| PASS | ------------------------------------------------------------------------------ Healthcheck :: Verify policy pap health check | PASS | ------------------------------------------------------------------------------ Consolidated Healthcheck :: Verify policy consolidated health check | PASS | ------------------------------------------------------------------------------ Metrics :: Verify policy pap is exporting prometheus metrics | PASS | ------------------------------------------------------------------------------ AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | ------------------------------------------------------------------------------ ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | ------------------------------------------------------------------------------ DeployPdpGroups :: Deploy policies in PdpGroups | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | ------------------------------------------------------------------------------ UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | ------------------------------------------------------------------------------ UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | ------------------------------------------------------------------------------ DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | ------------------------------------------------------------------------------ DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | ------------------------------------------------------------------------------ pap.Pap-Test | PASS | 22 tests, 22 passed, 0 failed ============================================================================== pap.Pap-Slas ============================================================================== WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeForHealthcheck :: Validate component healthche... 
| PASS | ------------------------------------------------------------------------------ ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | ------------------------------------------------------------------------------ ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS | ------------------------------------------------------------------------------ pap.Pap-Slas | PASS | 8 tests, 8 passed, 0 failed ============================================================================== pap | PASS | 30 tests, 30 passed, 0 failed ============================================================================== Output: /tmp/tmp.pn8iy6Kpoj/output.xml Log: /tmp/tmp.pn8iy6Kpoj/log.html Report: /tmp/tmp.pn8iy6Kpoj/report.html + RESULT=0 + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + echo 'RESULT: 0' RESULT: 0 + exit 0 + on_exit + rc=0 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]] + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-apex-pdp Up 2 minutes policy-pap Up 2 minutes kafka Up 2 minutes grafana Up 2 minutes policy-api Up 2 minutes compose_zookeeper_1 Up 2 minutes prometheus Up 2 minutes mariadb Up 2 minutes simulator Up 2 minutes + docker_stats ++ uname -s + '[' Linux == Darwin ']' + sh -c 'top -bn1 | head -3' top - 23:17:08 up 6 min, 0 users, load average: 0.95, 1.23, 0.60 Tasks: 196 total, 1 running, 129 sleeping, 0 stopped, 0 zombie %Cpu(s): 10.1 us, 1.9 sy, 0.0 ni, 84.2 id, 3.8 wa, 0.0 hi, 0.0 si, 0.1 st + echo + sh -c 'free -h' total used free shared buff/cache available Mem: 31G 3.1G 22G 1.3M 6.0G 27G Swap: 1.0G 0B 1.0G + echo + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-apex-pdp Up 2 minutes policy-pap Up 2 minutes kafka Up 2 minutes grafana Up 2 minutes policy-api Up 2 minutes compose_zookeeper_1 Up 2 minutes prometheus Up 2 minutes mariadb Up 2 minutes simulator Up 2 minutes + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 24f93304139d policy-apex-pdp 1.07% 188.4MiB / 31.41GiB 0.59% 56kB / 90.1kB 0B / 0B 52 3c4c7215af5e policy-pap 0.91% 684.1MiB / 31.41GiB 2.13% 2.33MB / 808kB 0B / 153MB 65 22286465a78e kafka 1.17% 393.9MiB / 31.41GiB 1.22% 239kB / 215kB 0B / 606kB 85 7921f99bd8a3 grafana 0.02% 60.67MiB / 31.41GiB 0.19% 19.5kB / 4.26kB 0B / 24MB 18 efc297b62d50 policy-api 0.08% 679.6MiB / 31.41GiB 2.11% 2.49MB / 1.26MB 0B / 0B 55 68e97d183625 
compose_zookeeper_1 0.07% 99.6MiB / 31.41GiB 0.31% 58.8kB / 51.8kB 0B / 373kB 60 4c7517fc4ecd prometheus 0.00% 24.86MiB / 31.41GiB 0.08% 184kB / 10.7kB 0B / 0B 14 cbd110d47717 mariadb 0.02% 103.1MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11.1MB / 68.2MB 28 394007c1fca1 simulator 0.12% 120.1MiB / 31.41GiB 0.37% 1.94kB / 0B 102kB / 0B 78 + echo + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ++ echo 'Shut down started!' Shut down started! ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose ++ source export-ports.sh ++ source get-versions.sh ++ echo 'Collecting logs from docker compose containers...' Collecting logs from docker compose containers... ++ docker-compose logs ++ cat docker_compose.log Attaching to policy-apex-pdp, policy-pap, kafka, grafana, policy-api, policy-db-migrator, compose_zookeeper_1, prometheus, mariadb, simulator zookeeper_1 | ===> User zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper_1 | ===> Configuring ... zookeeper_1 | ===> Running preflight checks ... zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper_1 | ===> Launching ... zookeeper_1 | ===> Launching zookeeper ... zookeeper_1 | [2024-02-20 23:14:48,907] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-20 23:14:48,913] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-20 23:14:48,913] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-20 23:14:48,913] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-20 23:14:48,913] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-20 23:14:48,915] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-02-20 23:14:48,915] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-02-20 23:14:48,915] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-02-20 23:14:48,915] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper_1 | [2024-02-20 23:14:48,916] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) zookeeper_1 | [2024-02-20 23:14:48,916] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-20 23:14:48,917] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-20 23:14:48,917] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-20 23:14:48,917] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-20 23:14:48,917] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-20 23:14:48,917] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper_1 | [2024-02-20 23:14:48,928] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics) zookeeper_1 | [2024-02-20 23:14:48,931] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-02-20 23:14:48,931] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-02-20 23:14:48,933] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-02-20 23:14:48,942] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,942] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,943] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,943] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,943] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,943] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,943] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,943] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,943] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,943] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,944] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,944] INFO Server environment:host.name=68e97d183625 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,944] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,944] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,944] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,944] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7
.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | 
[2024-02-20 23:14:48,944] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,944] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,944] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,944] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,944] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,944] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,944] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,944] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,944] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,944] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,945] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,945] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,945] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,945] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,945] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,945] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,945] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,945] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,945] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,946] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper_1 | [2024-02-20 23:14:48,947] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,947] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,948] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-02-20 23:14:48,948] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-02-20 23:14:48,949] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-20 23:14:48,949] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-20 23:14:48,949] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-20 23:14:48,949] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-20 23:14:48,949] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-20 23:14:48,949] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-20 23:14:48,951] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,951] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,952] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper_1 | [2024-02-20 23:14:48,952] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper_1 | [2024-02-20 23:14:48,952] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:48,972] INFO Logging initialized @496ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper_1 | [2024-02-20 23:14:49,061] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-02-20 23:14:49,061] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-02-20 23:14:49,090] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) zookeeper_1 | [2024-02-20 23:14:49,126] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper_1 | [2024-02-20 23:14:49,126] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper_1 | [2024-02-20 23:14:49,127] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) zookeeper_1 | [2024-02-20 23:14:49,130] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper_1 | [2024-02-20 23:14:49,139] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-02-20 23:14:49,157] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper_1 | [2024-02-20 23:14:49,157] INFO Started @682ms (org.eclipse.jetty.server.Server) zookeeper_1 | [2024-02-20 23:14:49,157] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper_1 | [2024-02-20 23:14:49,163] 
INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper_1 | [2024-02-20 23:14:49,164] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper_1 | [2024-02-20 23:14:49,166] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper_1 | [2024-02-20 23:14:49,167] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper_1 | [2024-02-20 23:14:49,182] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper_1 | [2024-02-20 23:14:49,182] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper_1 | [2024-02-20 23:14:49,183] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-02-20 23:14:49,183] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-02-20 23:14:49,188] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper_1 | [2024-02-20 23:14:49,188] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-02-20 23:14:49,191] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-02-20 23:14:49,191] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-02-20 23:14:49,192] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-20 23:14:49,200] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper_1 | [2024-02-20 23:14:49,201] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper_1 | [2024-02-20 23:14:49,215] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper_1 | [2024-02-20 23:14:49,216] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) mariadb | 2024-02-20 23:14:41+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-02-20 23:14:41+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-02-20 23:14:41+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-02-20 23:14:41+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-02-20 23:14:41 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-02-20 23:14:41 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-02-20 23:14:41 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 
mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-02-20 23:14:42+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-02-20 23:14:42+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-02-20 23:14:42+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-02-20 23:14:42 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 96 ... mariadb | 2024-02-20 23:14:42 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-02-20 23:14:42 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-02-20 23:14:42 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-02-20 23:14:42 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-02-20 23:14:42 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-02-20 23:14:42 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-02-20 23:14:42 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-02-20 23:14:42 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-02-20 23:14:42 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-02-20 23:14:43 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-02-20 23:14:43 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-02-20 23:14:43 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-02-20 23:14:43 0 [Note] InnoDB: log sequence number 46590; transaction id 14 mariadb | 2024-02-20 23:14:43 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-02-20 23:14:43 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-02-20 23:14:43 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-02-20 23:14:43 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-02-20 23:14:43 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-02-20 23:14:43+00:00 [Note] [Entrypoint]: Temporary server started. 
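The mariadb entrypoint lines above show the init sequence: database files are created, a temporary server is started so the init scripts can run, and only later does the final server listen on TCP. While that temporary server is up it is reachable only through the unix socket (the log reports port: 0). A minimal sketch of probing it from inside the container; the container name and credentials are assumptions, and depending on how far the init has progressed root may not yet have the MYSQL_ROOT_PASSWORD from the compose environment:

# Sketch only: container name and root credentials are assumptions, not taken from this log.
# The temporary server binds only to /run/mysqld/mysqld.sock, so a TCP probe of 3306
# will not succeed until the final server starts.
docker exec mariadb \
  mysqladmin --socket=/run/mysqld/mysqld.sock -uroot -p"${MYSQL_ROOT_PASSWORD}" ping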
mariadb | 2024-02-20 23:14:45+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2024-02-20 23:14:45+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | 2024-02-20 23:14:45+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | mariadb | 2024-02-20 23:14:45+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. mariadb | # mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); mariadb | # you may not use this file except in compliance with the License. mariadb | # You may obtain a copy of the License at mariadb | # mariadb | # http://www.apache.org/licenses/LICENSE-2.0 mariadb | # mariadb | # Unless required by applicable law or agreed to in writing, software mariadb | # distributed under the License is distributed on an "AS IS" BASIS, mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. mariadb | # See the License for the specific language governing permissions and mariadb | # limitations under the License. mariadb | mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | do mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" mariadb | done mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" 
--execute "FLUSH PRIVILEGES;" mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp mariadb | mariadb | 2024-02-20 23:14:46+00:00 [Note] [Entrypoint]: Stopping temporary server mariadb | 2024-02-20 23:14:46 0 [Note] mariadbd (initiated by: unknown): Normal shutdown mariadb | 2024-02-20 23:14:46 0 [Note] InnoDB: FTS optimize thread exiting. mariadb | 2024-02-20 23:14:46 0 [Note] InnoDB: Starting shutdown... mariadb | 2024-02-20 23:14:46 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool mariadb | 2024-02-20 23:14:46 0 [Note] InnoDB: Buffer pool(s) dump completed at 240220 23:14:46 mariadb | 2024-02-20 23:14:46 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" mariadb | 2024-02-20 23:14:46 0 [Note] InnoDB: Shutdown completed; log sequence number 332890; transaction id 298 mariadb | 2024-02-20 23:14:46 0 [Note] mariadbd: Shutdown complete mariadb | mariadb | 2024-02-20 23:14:46+00:00 [Note] [Entrypoint]: Temporary server stopped mariadb | mariadb | 2024-02-20 23:14:46+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. mariadb | mariadb | 2024-02-20 23:14:46 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... mariadb | 2024-02-20 23:14:46 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-02-20 23:14:46 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-02-20 23:14:46 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-02-20 23:14:46 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-02-20 23:14:46 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-02-20 23:14:46 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-02-20 23:14:46 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-02-20 23:14:46 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-02-20 23:14:47 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-02-20 23:14:47 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-02-20 23:14:47 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-02-20 23:14:47 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-02-20 23:14:47 0 [Note] InnoDB: log sequence number 332890; transaction id 299 mariadb | 2024-02-20 23:14:47 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool mariadb | 2024-02-20 23:14:47 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-02-20 23:14:47 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-02-20 23:14:47 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. mariadb | 2024-02-20 23:14:47 0 [Note] Server socket created on IP: '0.0.0.0'. mariadb | 2024-02-20 23:14:47 0 [Note] Server socket created on IP: '::'. mariadb | 2024-02-20 23:14:47 0 [Note] mariadbd: ready for connections. 
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution mariadb | 2024-02-20 23:14:47 0 [Note] InnoDB: Buffer pool(s) load completed at 240220 23:14:47 mariadb | 2024-02-20 23:14:47 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) mariadb | 2024-02-20 23:14:48 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) mariadb | 2024-02-20 23:14:48 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) mariadb | 2024-02-20 23:14:48 13 [Warning] Aborted connection 13 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) zookeeper_1 | [2024-02-20 23:14:50,636] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) policy-apex-pdp | Waiting for mariadb port 3306... policy-apex-pdp | mariadb (172.17.0.2:3306) open policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | kafka (172.17.0.9:9092) open policy-apex-pdp | Waiting for pap port 6969... policy-apex-pdp | pap (172.17.0.10:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2024-02-20T23:15:18.136+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2024-02-20T23:15:18.336+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = b20135be-18a4-4de4-8569-3ebb4824ad25 policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | 
internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 grafana | logger=settings t=2024-02-20T23:14:44.053536707Z level=info msg="Starting Grafana" version=10.3.3 commit=252761264e22ece57204b327f9130d3b44592c01 branch=HEAD compiled=2024-02-20T23:14:44Z grafana | logger=settings t=2024-02-20T23:14:44.053726909Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2024-02-20T23:14:44.05382794Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2024-02-20T23:14:44.053841Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2024-02-20T23:14:44.05384465Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2024-02-20T23:14:44.05384862Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2024-02-20T23:14:44.053851481Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2024-02-20T23:14:44.053854241Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2024-02-20T23:14:44.053857541Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2024-02-20T23:14:44.053861011Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-02-20T23:14:44.053863591Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-02-20T23:14:44.053986172Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2024-02-20T23:14:44.053990262Z level=info msg=Target target=[all] grafana | logger=settings t=2024-02-20T23:14:44.053995982Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-02-20T23:14:44.053999382Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-02-20T23:14:44.054006372Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-02-20T23:14:44.054009302Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-02-20T23:14:44.054012563Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-02-20T23:14:44.054015853Z level=info msg="App mode production" 
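The grafana settings lines above record the override order the server applies at startup: defaults.ini, then /etc/grafana/grafana.ini, then command-line arguments, then GF_* environment variables (here GF_PATHS_DATA, GF_PATHS_LOGS, GF_PATHS_PLUGINS and GF_PATHS_PROVISIONING). A hedged sketch of setting the same paths through the environment when running the image standalone; only the variable names and the 10.3.3 version come from the log, while the image repository and published port are assumptions:

# Sketch: Grafana maps GF_<SECTION>_<KEY> environment variables onto grafana.ini
# settings; the four below mirror the overrides reported in the startup log.
docker run -d --name grafana \
  -e GF_PATHS_DATA=/var/lib/grafana \
  -e GF_PATHS_LOGS=/var/log/grafana \
  -e GF_PATHS_PLUGINS=/var/lib/grafana/plugins \
  -e GF_PATHS_PROVISIONING=/etc/grafana/provisioning \
  -p 3000:3000 grafana/grafana:10.3.3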
grafana | logger=sqlstore t=2024-02-20T23:14:44.054340407Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-02-20T23:14:44.054360997Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-02-20T23:14:44.055015855Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-02-20T23:14:44.055932777Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-02-20T23:14:44.056721827Z level=info msg="Migration successfully executed" id="create migration_log table" duration=788.58µs grafana | logger=migrator t=2024-02-20T23:14:44.06328978Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2024-02-20T23:14:44.063858157Z level=info msg="Migration successfully executed" id="create user table" duration=568.117µs grafana | logger=migrator t=2024-02-20T23:14:44.066957846Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-02-20T23:14:44.068236422Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.278476ms grafana | logger=migrator t=2024-02-20T23:14:44.072133462Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-02-20T23:14:44.073420748Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.281977ms grafana | logger=migrator t=2024-02-20T23:14:44.07677106Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-02-20T23:14:44.0775181Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=747.18µs grafana | logger=migrator t=2024-02-20T23:14:44.08312049Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-02-20T23:14:44.084137233Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.015313ms grafana | logger=migrator t=2024-02-20T23:14:44.087483195Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-02-20T23:14:44.092251106Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.767201ms grafana | logger=migrator t=2024-02-20T23:14:44.095271774Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-02-20T23:14:44.095983263Z level=info msg="Migration successfully executed" id="create user table v2" duration=711.299µs grafana | logger=migrator t=2024-02-20T23:14:44.101727465Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2024-02-20T23:14:44.102540916Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=814.431µs grafana | logger=migrator t=2024-02-20T23:14:44.106120581Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2024-02-20T23:14:44.107713371Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.59296ms grafana | logger=migrator t=2024-02-20T23:14:44.111335977Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-02-20T23:14:44.111752902Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=416.535µs grafana | logger=migrator 
t=2024-02-20T23:14:44.116947118Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-02-20T23:14:44.117526565Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=578.927µs grafana | logger=migrator t=2024-02-20T23:14:44.121152121Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-02-20T23:14:44.123113116Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.960375ms grafana | logger=migrator t=2024-02-20T23:14:44.126379717Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2024-02-20T23:14:44.126420577Z level=info msg="Migration successfully executed" id="Update user table charset" duration=42.62µs grafana | logger=migrator t=2024-02-20T23:14:44.129617968Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-02-20T23:14:44.130838523Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.218075ms grafana | logger=migrator t=2024-02-20T23:14:44.136686227Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-02-20T23:14:44.137145533Z level=info msg="Migration successfully executed" id="Add missing user data" duration=461.496µs grafana | logger=migrator t=2024-02-20T23:14:44.140504665Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2024-02-20T23:14:44.142488791Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.983465ms grafana | logger=migrator t=2024-02-20T23:14:44.14560068Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2024-02-20T23:14:44.14643647Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=835.65µs grafana | logger=migrator t=2024-02-20T23:14:44.149364347Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2024-02-20T23:14:44.150605453Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.241046ms grafana | logger=migrator t=2024-02-20T23:14:44.155467015Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2024-02-20T23:14:44.164813843Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.346519ms grafana | logger=migrator t=2024-02-20T23:14:44.167883791Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2024-02-20T23:14:44.168664111Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=779.79µs grafana | logger=migrator t=2024-02-20T23:14:44.171817801Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2024-02-20T23:14:44.172605921Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=788.24µs grafana | logger=migrator t=2024-02-20T23:14:44.177703175Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2024-02-20T23:14:44.178668738Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" 
duration=960.543µs grafana | logger=migrator t=2024-02-20T23:14:44.182169422Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2024-02-20T23:14:44.183500189Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.326407ms grafana | logger=migrator t=2024-02-20T23:14:44.186714839Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2024-02-20T23:14:44.187485499Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=770.5µs grafana | logger=migrator t=2024-02-20T23:14:44.19228267Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2024-02-20T23:14:44.19230745Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=25.69µs grafana | logger=migrator t=2024-02-20T23:14:44.194607879Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2024-02-20T23:14:44.195300828Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=693.239µs grafana | logger=migrator t=2024-02-20T23:14:44.198197674Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2024-02-20T23:14:44.198930764Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=733.02µs grafana | logger=migrator t=2024-02-20T23:14:44.203811085Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2024-02-20T23:14:44.204575225Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=764.36µs grafana | logger=migrator t=2024-02-20T23:14:44.207864617Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2024-02-20T23:14:44.209055262Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.190995ms grafana | logger=migrator t=2024-02-20T23:14:44.212236772Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2024-02-20T23:14:44.216720479Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.485576ms grafana | logger=migrator t=2024-02-20T23:14:44.221385907Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2024-02-20T23:14:44.222287799Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=901.682µs grafana | logger=migrator t=2024-02-20T23:14:44.22555571Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2024-02-20T23:14:44.226403531Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=847.881µs grafana | logger=migrator t=2024-02-20T23:14:44.229320668Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2024-02-20T23:14:44.230180989Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=860.191µs grafana | logger=migrator t=2024-02-20T23:14:44.235331114Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator 
t=2024-02-20T23:14:44.236127264Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=791.16µs grafana | logger=migrator t=2024-02-20T23:14:44.238898039Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2024-02-20T23:14:44.23980704Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=907.671µs grafana | logger=migrator t=2024-02-20T23:14:44.243050831Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2024-02-20T23:14:44.24372464Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=673.579µs grafana | logger=migrator t=2024-02-20T23:14:44.246866439Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2024-02-20T23:14:44.247748031Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=880.702µs grafana | logger=migrator t=2024-02-20T23:14:44.252912956Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2024-02-20T23:14:44.253359571Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=449.375µs grafana | logger=migrator t=2024-02-20T23:14:44.257567475Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2024-02-20T23:14:44.258649658Z level=info msg="Migration successfully executed" id="create star table" duration=1.081273ms grafana | logger=migrator t=2024-02-20T23:14:44.262054301Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2024-02-20T23:14:44.263330757Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.276386ms grafana | logger=migrator t=2024-02-20T23:14:44.268803297Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2024-02-20T23:14:44.269574996Z level=info msg="Migration successfully executed" id="create org table v1" duration=771.199µs grafana | logger=migrator t=2024-02-20T23:14:44.27460805Z level=info msg="Executing migration" id="create index UQE_org_name - v1" policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | 
sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-02-20T23:15:18.533+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-02-20T23:15:18.533+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-02-20T23:15:18.533+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708470918532 policy-apex-pdp | [2024-02-20T23:15:18.537+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-1, groupId=b20135be-18a4-4de4-8569-3ebb4824ad25] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-02-20T23:15:18.550+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-02-20T23:15:18.550+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2024-02-20T23:15:18.554+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b20135be-18a4-4de4-8569-3ebb4824ad25, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2024-02-20T23:15:18.592+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 
policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2 policy-apex-pdp | client.rack = kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | SLF4J: Class path contains multiple SLF4J bindings. kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] kafka | [2024-02-20 23:14:50,567] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,568] INFO Client environment:host.name=22286465a78e (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,568] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,568] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,568] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,568] INFO Client 
environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8
.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr
/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,568] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,569] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,569] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,569] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,569] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,569] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,569] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,569] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,569] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,569] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,569] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,569] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,572] INFO Initiating client connection, 
connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@2fd6b6c7 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,576] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-02-20 23:14:50,580] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-02-20 23:14:50,587] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-20 23:14:50,610] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-20 23:14:50,611] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-20 23:14:50,620] INFO Socket connection established, initiating session, client: /172.17.0.9:57464, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-20 23:14:50,663] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000433ff0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-20 23:14:50,783] INFO Session: 0x100000433ff0000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:50,784] INFO EventThread shut down for session: 0x100000433ff0000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... kafka | [2024-02-20 23:14:51,430] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-02-20 23:14:51,759] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-02-20 23:14:51,828] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-02-20 23:14:51,829] INFO starting (kafka.server.KafkaServer) kafka | [2024-02-20 23:14:51,829] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-02-20 23:14:51,842] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-02-20 23:14:51,846] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,846] INFO Client environment:host.name=22286465a78e (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,846] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,846] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,846] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,846] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/ka
fka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,846] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,846] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,846] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,846] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,846] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,846] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,846] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,846] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,846] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,846] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,847] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,847] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,848] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@5b619d14 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-20 23:14:51,851] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-02-20 23:14:51,857] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) grafana | logger=migrator t=2024-02-20T23:14:44.275933337Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.321757ms grafana | logger=migrator t=2024-02-20T23:14:44.279280599Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2024-02-20T23:14:44.280466824Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.185445ms grafana | logger=migrator t=2024-02-20T23:14:44.283836027Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2024-02-20T23:14:44.285178694Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.342717ms grafana | logger=migrator t=2024-02-20T23:14:44.290802825Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2024-02-20T23:14:44.291662275Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=863.18µs grafana | logger=migrator t=2024-02-20T23:14:44.295091739Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2024-02-20T23:14:44.296398445Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.306436ms grafana | logger=migrator t=2024-02-20T23:14:44.299592676Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2024-02-20T23:14:44.299626676Z level=info msg="Migration successfully executed" id="Update org table charset" duration=43.9µs grafana | logger=migrator t=2024-02-20T23:14:44.302533563Z 
level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2024-02-20T23:14:44.302553503Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=20.98µs grafana | logger=migrator t=2024-02-20T23:14:44.30784236Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2024-02-20T23:14:44.308110453Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=268.263µs grafana | logger=migrator t=2024-02-20T23:14:44.312307436Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2024-02-20T23:14:44.313590093Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.282567ms grafana | logger=migrator t=2024-02-20T23:14:44.31735001Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2024-02-20T23:14:44.318373883Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.019263ms grafana | logger=migrator t=2024-02-20T23:14:44.321413651Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2024-02-20T23:14:44.322566456Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.147075ms grafana | logger=migrator t=2024-02-20T23:14:44.328022475Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2024-02-20T23:14:44.328610242Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=587.067µs grafana | logger=migrator t=2024-02-20T23:14:44.331427178Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2024-02-20T23:14:44.332546252Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.118504ms grafana | logger=migrator t=2024-02-20T23:14:44.335750073Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2024-02-20T23:14:44.336728065Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=977.652µs grafana | logger=migrator t=2024-02-20T23:14:44.345344024Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2024-02-20T23:14:44.359715905Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=14.371371ms grafana | logger=migrator t=2024-02-20T23:14:44.364697728Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2024-02-20T23:14:44.366892196Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=2.195848ms grafana | logger=migrator t=2024-02-20T23:14:44.369573Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2024-02-20T23:14:44.370411611Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=838.681µs grafana | logger=migrator t=2024-02-20T23:14:44.376050952Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2024-02-20T23:14:44.376860042Z level=info msg="Migration successfully executed" 
id="create index UQE_dashboard_org_id_slug - v2" duration=805.29µs grafana | logger=migrator t=2024-02-20T23:14:44.379633357Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2024-02-20T23:14:44.379989952Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=356.385µs grafana | logger=migrator t=2024-02-20T23:14:44.382110408Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2024-02-20T23:14:44.382961839Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=851.191µs grafana | logger=migrator t=2024-02-20T23:14:44.387993703Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2024-02-20T23:14:44.388060674Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=67.601µs grafana | logger=migrator t=2024-02-20T23:14:44.390103869Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2024-02-20T23:14:44.392043244Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.939285ms grafana | logger=migrator t=2024-02-20T23:14:44.394596866Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2024-02-20T23:14:44.396431209Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.834213ms grafana | logger=migrator t=2024-02-20T23:14:44.401646465Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2024-02-20T23:14:44.404443661Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.796476ms grafana | logger=migrator t=2024-02-20T23:14:44.406974093Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2024-02-20T23:14:44.408183478Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.215166ms grafana | logger=migrator t=2024-02-20T23:14:44.411193356Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2024-02-20T23:14:44.413209091Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.017255ms grafana | logger=migrator t=2024-02-20T23:14:44.419824165Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" kafka | [2024-02-20 23:14:51,858] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-02-20 23:14:51,863] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-20 23:14:51,872] INFO Socket connection established, initiating session, client: /172.17.0.9:57466, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-20 23:14:51,881] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000433ff0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-20 23:14:51,885] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2024-02-20 23:14:52,200] INFO Cluster ID = On8LQcwTQAOf3IoV4hs6OA (kafka.server.KafkaServer) kafka | [2024-02-20 23:14:52,203] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2024-02-20 23:14:52,257] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] kafka | group.consumer.heartbeat.interval.ms = 5000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 kafka | group.consumer.max.session.timeout.ms = 60000 kafka | group.consumer.max.size = 2147483647 kafka | group.consumer.min.heartbeat.interval.ms = 5000 kafka | group.consumer.min.session.timeout.ms = 45000 kafka | group.consumer.session.timeout.ms = 45000 kafka | group.coordinator.new.enable = false kafka | group.coordinator.threads = 1 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT grafana | logger=migrator t=2024-02-20T23:14:44.421112301Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.288106ms grafana | logger=migrator t=2024-02-20T23:14:44.424282681Z level=info msg="Executing migration" id="Add index for 
dashboard_id in dashboard_tag" grafana | logger=migrator t=2024-02-20T23:14:44.425631918Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.348597ms grafana | logger=migrator t=2024-02-20T23:14:44.428713927Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2024-02-20T23:14:44.428738408Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=25.481µs grafana | logger=migrator t=2024-02-20T23:14:44.433009142Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2024-02-20T23:14:44.433033812Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=25.57µs grafana | logger=migrator t=2024-02-20T23:14:44.435797847Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2024-02-20T23:14:44.438787145Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.981877ms grafana | logger=migrator t=2024-02-20T23:14:44.442968567Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2024-02-20T23:14:44.445114135Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.146427ms grafana | logger=migrator t=2024-02-20T23:14:44.447783038Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2024-02-20T23:14:44.449800424Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.018676ms grafana | logger=migrator t=2024-02-20T23:14:44.454122628Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2024-02-20T23:14:44.456136134Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.010126ms grafana | logger=migrator t=2024-02-20T23:14:44.458709246Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2024-02-20T23:14:44.45897525Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=260.214µs grafana | logger=migrator t=2024-02-20T23:14:44.461835596Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2024-02-20T23:14:44.462654796Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=818.96µs grafana | logger=migrator t=2024-02-20T23:14:44.466988701Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2024-02-20T23:14:44.467757851Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=769.38µs grafana | logger=migrator t=2024-02-20T23:14:44.470461645Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2024-02-20T23:14:44.470500885Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=42.28µs grafana | logger=migrator t=2024-02-20T23:14:44.47326154Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2024-02-20T23:14:44.474514926Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.252936ms grafana | 
logger=migrator t=2024-02-20T23:14:44.479712042Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2024-02-20T23:14:44.480741545Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.029063ms grafana | logger=migrator t=2024-02-20T23:14:44.483614901Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2024-02-20T23:14:44.493227612Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=9.613301ms grafana | logger=migrator t=2024-02-20T23:14:44.495887876Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2024-02-20T23:14:44.496601735Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=712.759µs grafana | logger=migrator t=2024-02-20T23:14:44.501369305Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2024-02-20T23:14:44.502210946Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=842.001µs grafana | logger=migrator t=2024-02-20T23:14:44.50494055Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2024-02-20T23:14:44.505865742Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=925.002µs grafana | logger=migrator t=2024-02-20T23:14:44.508582446Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2024-02-20T23:14:44.508912311Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=329.585µs grafana | logger=migrator t=2024-02-20T23:14:44.511414532Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2024-02-20T23:14:44.512146092Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=729.529µs grafana | logger=migrator t=2024-02-20T23:14:44.51831746Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2024-02-20T23:14:44.52150916Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.188971ms grafana | logger=migrator t=2024-02-20T23:14:44.524276155Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2024-02-20T23:14:44.525546901Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.270756ms grafana | logger=migrator t=2024-02-20T23:14:44.528719611Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2024-02-20T23:14:44.528970814Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=251.143µs grafana | logger=migrator t=2024-02-20T23:14:44.533504601Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2024-02-20T23:14:44.533711794Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=207.033µs grafana | logger=migrator t=2024-02-20T23:14:44.536210576Z level=info 
msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2024-02-20T23:14:44.537572403Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.361697ms grafana | logger=migrator t=2024-02-20T23:14:44.54053835Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2024-02-20T23:14:44.544024694Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.486284ms grafana | logger=migrator t=2024-02-20T23:14:44.547281355Z level=info msg="Executing migration" id="create data_source table" policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = b20135be-18a4-4de4-8569-3ebb4824ad25 policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 kafka | inter.broker.protocol.version = 3.6-IV2 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 
15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.local.retention.bytes = -2 kafka | log.local.retention.ms = -2 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 grafana | logger=migrator t=2024-02-20T23:14:44.548248758Z level=info msg="Migration successfully executed" id="create data_source table" duration=972.543µs grafana | logger=migrator t=2024-02-20T23:14:44.552210078Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2024-02-20T23:14:44.553010018Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=795.99µs grafana | logger=migrator t=2024-02-20T23:14:44.555819613Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2024-02-20T23:14:44.556672004Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=852.391µs grafana | 
logger=migrator t=2024-02-20T23:14:44.559451549Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2024-02-20T23:14:44.56029379Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=842.631µs grafana | logger=migrator t=2024-02-20T23:14:44.564847647Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2024-02-20T23:14:44.56584217Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=991.993µs grafana | logger=migrator t=2024-02-20T23:14:44.568925359Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2024-02-20T23:14:44.580738388Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=11.814229ms grafana | logger=migrator t=2024-02-20T23:14:44.583728846Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2024-02-20T23:14:44.584575297Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=846.391µs grafana | logger=migrator t=2024-02-20T23:14:44.588472876Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2024-02-20T23:14:44.589362027Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=889.881µs grafana | logger=migrator t=2024-02-20T23:14:44.592017781Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2024-02-20T23:14:44.592890082Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=871.991µs grafana | logger=migrator t=2024-02-20T23:14:44.597292447Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2024-02-20T23:14:44.597944566Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=651.879µs grafana | logger=migrator t=2024-02-20T23:14:44.60066643Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2024-02-20T23:14:44.60304353Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.37627ms grafana | logger=migrator t=2024-02-20T23:14:44.605591242Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2024-02-20T23:14:44.607916082Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.32834ms grafana | logger=migrator t=2024-02-20T23:14:44.611930232Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2024-02-20T23:14:44.612032304Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=102.942µs grafana | logger=migrator t=2024-02-20T23:14:44.614216481Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2024-02-20T23:14:44.614450174Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=233.843µs grafana | logger=migrator t=2024-02-20T23:14:44.617022677Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2024-02-20T23:14:44.619358666Z 
level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.339319ms grafana | logger=migrator t=2024-02-20T23:14:44.62205702Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2024-02-20T23:14:44.622310694Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=253.734µs grafana | logger=migrator t=2024-02-20T23:14:44.626192703Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2024-02-20T23:14:44.626416075Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=223.262µs grafana | logger=migrator t=2024-02-20T23:14:44.62911698Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2024-02-20T23:14:44.632168568Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.050258ms grafana | logger=migrator t=2024-02-20T23:14:44.636226549Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2024-02-20T23:14:44.636594364Z level=info msg="Migration successfully executed" id="Update uid value" duration=373.745µs grafana | logger=migrator t=2024-02-20T23:14:44.639603482Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2024-02-20T23:14:44.640482613Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=879.001µs grafana | logger=migrator t=2024-02-20T23:14:44.644318342Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2024-02-20T23:14:44.645172052Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=853.56µs grafana | logger=migrator t=2024-02-20T23:14:44.647907187Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2024-02-20T23:14:44.648648606Z level=info msg="Migration successfully executed" id="create api_key table" duration=741.149µs grafana | logger=migrator t=2024-02-20T23:14:44.651466902Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2024-02-20T23:14:44.652287872Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=815.28µs grafana | logger=migrator t=2024-02-20T23:14:44.656428225Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2024-02-20T23:14:44.657268285Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=840µs grafana | logger=migrator t=2024-02-20T23:14:44.660318864Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2024-02-20T23:14:44.661201275Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=882.141µs grafana | logger=migrator t=2024-02-20T23:14:44.666996968Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2024-02-20T23:14:44.667788968Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=792.18µs grafana | logger=migrator t=2024-02-20T23:14:44.672079422Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2024-02-20T23:14:44.673221807Z level=info msg="Migration successfully executed" id="drop index 
UQE_api_key_key - v1" duration=1.141145ms kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-02-20T23:15:18.612+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-02-20T23:15:18.613+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-02-20T23:14:44.676436057Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2024-02-20T23:14:44.677765554Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.325867ms grafana | logger=migrator t=2024-02-20T23:14:44.682511734Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2024-02-20T23:14:44.691323866Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.812012ms grafana | logger=migrator 
t=2024-02-20T23:14:44.694398354Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2024-02-20T23:14:44.695528189Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.129135ms grafana | logger=migrator t=2024-02-20T23:14:44.699148074Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2024-02-20T23:14:44.700524372Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.376518ms grafana | logger=migrator t=2024-02-20T23:14:44.704685624Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2024-02-20T23:14:44.706128683Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.443179ms grafana | logger=migrator t=2024-02-20T23:14:44.709401814Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2024-02-20T23:14:44.710956714Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.54845ms grafana | logger=migrator t=2024-02-20T23:14:44.71459271Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2024-02-20T23:14:44.715030245Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=440.925µs grafana | logger=migrator t=2024-02-20T23:14:44.718836503Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2024-02-20T23:14:44.719510922Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=674.029µs grafana | logger=migrator t=2024-02-20T23:14:44.722254246Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2024-02-20T23:14:44.722348608Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=72.361µs grafana | logger=migrator t=2024-02-20T23:14:44.725199154Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2024-02-20T23:14:44.727936298Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.736764ms grafana | logger=migrator t=2024-02-20T23:14:44.731320931Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator t=2024-02-20T23:14:44.734001265Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.677254ms grafana | logger=migrator t=2024-02-20T23:14:44.736697099Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2024-02-20T23:14:44.736934742Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=237.583µs grafana | logger=migrator t=2024-02-20T23:14:44.739873769Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2024-02-20T23:14:44.742548813Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.672074ms grafana | logger=migrator t=2024-02-20T23:14:44.745292578Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2024-02-20T23:14:44.748119303Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" 
duration=2.826505ms grafana | logger=migrator t=2024-02-20T23:14:44.75184438Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2024-02-20T23:14:44.75264013Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=800.72µs grafana | logger=migrator t=2024-02-20T23:14:44.755475106Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2024-02-20T23:14:44.756116324Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=641.058µs grafana | logger=migrator t=2024-02-20T23:14:44.759980623Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2024-02-20T23:14:44.760828994Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=848.141µs grafana | logger=migrator t=2024-02-20T23:14:44.763919093Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2024-02-20T23:14:44.764807274Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=888.201µs grafana | logger=migrator t=2024-02-20T23:14:44.767952164Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2024-02-20T23:14:44.768820525Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=868.141µs grafana | logger=migrator t=2024-02-20T23:14:44.772343049Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2024-02-20T23:14:44.77321813Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=875.061µs grafana | logger=migrator t=2024-02-20T23:14:44.776072056Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2024-02-20T23:14:44.776212028Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=142.822µs grafana | logger=migrator t=2024-02-20T23:14:44.778597068Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2024-02-20T23:14:44.778674409Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=81.181µs grafana | logger=migrator t=2024-02-20T23:14:44.782119073Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2024-02-20T23:14:44.786259545Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.138372ms grafana | logger=migrator t=2024-02-20T23:14:44.790724182Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2024-02-20T23:14:44.793596538Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.866126ms grafana | logger=migrator t=2024-02-20T23:14:44.796405393Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2024-02-20T23:14:44.796554685Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=149.592µs grafana | 
logger=migrator t=2024-02-20T23:14:44.800291743Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2024-02-20T23:14:44.801085583Z level=info msg="Migration successfully executed" id="create quota table v1" duration=793.88µs policy-db-migrator | Waiting for mariadb port 3306... policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded! policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | name version policy-db-migrator | policyadmin 0 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:44.805248105Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2024-02-20T23:14:44.806773224Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.523769ms grafana | logger=migrator t=2024-02-20T23:14:44.810027216Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2024-02-20T23:14:44.810203158Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=183.843µs grafana | logger=migrator t=2024-02-20T23:14:44.813295867Z level=info msg="Executing migration" id="create plugin_setting table" grafana 
| logger=migrator t=2024-02-20T23:14:44.814539403Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.242896ms grafana | logger=migrator t=2024-02-20T23:14:44.817577071Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2024-02-20T23:14:44.818810977Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.234676ms grafana | logger=migrator t=2024-02-20T23:14:44.822897218Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2024-02-20T23:14:44.825967577Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.070319ms grafana | logger=migrator t=2024-02-20T23:14:44.828668821Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2024-02-20T23:14:44.828743162Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=74.821µs grafana | logger=migrator t=2024-02-20T23:14:44.831548438Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2024-02-20T23:14:44.832438989Z level=info msg="Migration successfully executed" id="create session table" duration=889.992µs grafana | logger=migrator t=2024-02-20T23:14:44.835794761Z level=info msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2024-02-20T23:14:44.835975843Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=181.122µs grafana | logger=migrator t=2024-02-20T23:14:44.838330943Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2024-02-20T23:14:44.838501465Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=170.402µs grafana | logger=migrator t=2024-02-20T23:14:44.841558594Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2024-02-20T23:14:44.842401305Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=842.511µs grafana | logger=migrator t=2024-02-20T23:14:44.846495476Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2024-02-20T23:14:44.84759692Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.101164ms grafana | logger=migrator t=2024-02-20T23:14:44.850551518Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2024-02-20T23:14:44.850683259Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=126.191µs grafana | logger=migrator t=2024-02-20T23:14:44.853984001Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2024-02-20T23:14:44.854065892Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=80.061µs grafana | logger=migrator t=2024-02-20T23:14:44.857277853Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2024-02-20T23:14:44.860493813Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.21523ms grafana | logger=migrator t=2024-02-20T23:14:44.864322802Z level=info msg="Executing 
migration" id="Add playlist column updated_at" grafana | logger=migrator t=2024-02-20T23:14:44.867550462Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.22746ms grafana | logger=migrator t=2024-02-20T23:14:44.870477689Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2024-02-20T23:14:44.870648782Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=170.983µs grafana | logger=migrator t=2024-02-20T23:14:44.873534618Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2024-02-20T23:14:44.873729521Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=194.842µs grafana | logger=migrator t=2024-02-20T23:14:44.877272015Z level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2024-02-20T23:14:44.878251268Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=979.463µs grafana | logger=migrator t=2024-02-20T23:14:44.883744847Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2024-02-20T23:14:44.883899109Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=154.162µs grafana | logger=migrator t=2024-02-20T23:14:44.887053269Z level=info msg="Executing migration" id="Add column team_id in preferences" grafana | logger=migrator t=2024-02-20T23:14:44.892122473Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.068734ms grafana | logger=migrator t=2024-02-20T23:14:44.895321493Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2024-02-20T23:14:44.895613307Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=298.954µs grafana | logger=migrator t=2024-02-20T23:14:44.899434055Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2024-02-20T23:14:44.902902839Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.467554ms grafana | logger=migrator t=2024-02-20T23:14:44.906357823Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2024-02-20T23:14:44.909598704Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.240961ms grafana | logger=migrator t=2024-02-20T23:14:44.91244618Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2024-02-20T23:14:44.912600062Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=153.682µs grafana | logger=migrator t=2024-02-20T23:14:44.915590149Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2024-02-20T23:14:44.916630363Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.040224ms grafana | logger=migrator t=2024-02-20T23:14:44.921391953Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2024-02-20T23:14:44.922866781Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.480198ms grafana | logger=migrator 
t=2024-02-20T23:14:44.926634719Z level=info msg="Executing migration" id="create alert table v1" grafana | logger=migrator t=2024-02-20T23:14:44.92826046Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.61247ms grafana | logger=migrator t=2024-02-20T23:14:44.932338451Z level=info msg="Executing migration" id="add index alert org_id & id " grafana | logger=migrator t=2024-02-20T23:14:44.933402455Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.063874ms grafana | logger=migrator t=2024-02-20T23:14:44.936424123Z level=info msg="Executing migration" id="add index alert state" grafana | logger=migrator t=2024-02-20T23:14:44.937326774Z level=info msg="Migration successfully executed" id="add index alert state" duration=902.731µs grafana | logger=migrator t=2024-02-20T23:14:44.940221521Z level=info msg="Executing migration" id="add index alert dashboard_id" grafana | logger=migrator t=2024-02-20T23:14:44.941119432Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=897.801µs grafana | logger=migrator t=2024-02-20T23:14:44.945191764Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" grafana | logger=migrator t=2024-02-20T23:14:44.945991124Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=799.391µs grafana | logger=migrator t=2024-02-20T23:14:44.949432777Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" grafana | logger=migrator t=2024-02-20T23:14:44.950932926Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.500169ms grafana | logger=migrator t=2024-02-20T23:14:44.954028545Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" grafana | logger=migrator t=2024-02-20T23:14:44.955048118Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.020343ms grafana | logger=migrator t=2024-02-20T23:14:44.958798915Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" grafana | logger=migrator t=2024-02-20T23:14:44.973883246Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=15.084001ms grafana | logger=migrator t=2024-02-20T23:14:44.97737851Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" grafana | logger=migrator t=2024-02-20T23:14:44.978246921Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=869.151µs grafana | logger=migrator t=2024-02-20T23:14:44.98131454Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" grafana | logger=migrator t=2024-02-20T23:14:44.98207066Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=755.809µs grafana | logger=migrator t=2024-02-20T23:14:44.986011989Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" grafana | logger=migrator t=2024-02-20T23:14:44.986683248Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=382.215µs grafana | logger=migrator t=2024-02-20T23:14:44.990258923Z level=info msg="Executing migration" id="drop table 
alert_rule_tag_v1" grafana | logger=migrator t=2024-02-20T23:14:44.991063523Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=804.12µs grafana | logger=migrator t=2024-02-20T23:14:44.994503407Z level=info msg="Executing migration" id="create alert_notification table v1" grafana | logger=migrator t=2024-02-20T23:14:44.995631941Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.124204ms grafana | logger=migrator t=2024-02-20T23:14:44.999366218Z level=info msg="Executing migration" id="Add column is_default" grafana | logger=migrator t=2024-02-20T23:14:45.003242208Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.87615ms grafana | logger=migrator t=2024-02-20T23:14:45.006281055Z level=info msg="Executing migration" id="Add column frequency" grafana | logger=migrator t=2024-02-20T23:14:45.010010318Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.728953ms grafana | logger=migrator t=2024-02-20T23:14:45.012858743Z level=info msg="Executing migration" id="Add column send_reminder" grafana | logger=migrator t=2024-02-20T23:14:45.016917428Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.058275ms grafana | logger=migrator t=2024-02-20T23:14:45.021587769Z level=info msg="Executing migration" id="Add column disable_resolve_message" grafana | logger=migrator t=2024-02-20T23:14:45.027215748Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=5.632189ms grafana | logger=migrator t=2024-02-20T23:14:45.029891421Z level=info msg="Executing migration" id="add index alert_notification org_id & name" grafana | logger=migrator t=2024-02-20T23:14:45.03087594Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=979.439µs grafana | logger=migrator t=2024-02-20T23:14:45.257700236Z level=info msg="Executing migration" id="Update alert table charset" grafana | logger=migrator t=2024-02-20T23:14:45.257740156Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=43.18µs grafana | logger=migrator t=2024-02-20T23:14:45.260675202Z level=info msg="Executing migration" id="Update alert_notification table charset" grafana | logger=migrator t=2024-02-20T23:14:45.260695592Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=20.99µs grafana | logger=migrator t=2024-02-20T23:14:45.263066622Z level=info msg="Executing migration" id="create notification_journal table v1" grafana | logger=migrator t=2024-02-20T23:14:45.263647257Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=577.135µs grafana | logger=migrator t=2024-02-20T23:14:45.269232655Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" grafana | logger=migrator t=2024-02-20T23:14:45.269878991Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=646.226µs grafana | logger=migrator t=2024-02-20T23:14:45.272655705Z level=info msg="Executing migration" id="drop alert_notification_journal" grafana | logger=migrator t=2024-02-20T23:14:45.273162139Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=505.234µs grafana | logger=migrator 
t=2024-02-20T23:14:45.276478778Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2024-02-20T23:14:45.277014793Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=535.724µs grafana | logger=migrator t=2024-02-20T23:14:45.279854627Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2024-02-20T23:14:45.280569093Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=714.446µs grafana | logger=migrator t=2024-02-20T23:14:45.284296335Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2024-02-20T23:14:45.286889547Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.593162ms grafana | logger=migrator t=2024-02-20T23:14:45.291605578Z level=info msg="Executing migration" id="Add column uid in alert_notification" policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.2:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.6:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.8) policy-api | policy-api | [2024-02-20T23:14:55.407+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 19 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-02-20T23:14:55.409+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-02-20T23:14:57.047+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-02-20T23:14:57.142+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 85 ms. Found 6 JPA repository interfaces. policy-api | [2024-02-20T23:14:57.526+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-02-20T23:14:57.527+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-02-20T23:14:58.155+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-02-20T23:14:58.165+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-02-20T23:14:58.166+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-02-20T23:14:58.167+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] policy-api | [2024-02-20T23:14:58.258+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-02-20T23:14:58.259+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2785 ms policy-api | [2024-02-20T23:14:58.652+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-02-20T23:14:58.729+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 policy-api | [2024-02-20T23:14:58.732+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer policy-api | [2024-02-20T23:14:58.780+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-02-20T23:14:59.144+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-02-20T23:14:59.165+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2024-02-20T23:14:59.258+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@a0db585 policy-api | [2024-02-20T23:14:59.260+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-api | [2024-02-20T23:14:59.287+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) policy-api | [2024-02-20T23:14:59.289+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead policy-api | [2024-02-20T23:15:01.135+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2024-02-20T23:15:01.138+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2024-02-20T23:15:02.130+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2024-02-20T23:15:02.966+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2024-02-20T23:15:04.061+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2024-02-20T23:15:04.307+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@c7a7d3, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@79462469, org.springframework.security.web.context.SecurityContextHolderFilter@673ade3d, org.springframework.security.web.header.HeaderWriterFilter@6e2ab1f4, org.springframework.security.web.authentication.logout.LogoutFilter@39d666e0, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@547a79cd, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@4529b266, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@6aca85da, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3341ba8e, org.springframework.security.web.access.ExceptionTranslationFilter@495fa126, org.springframework.security.web.access.intercept.AuthorizationFilter@206d4413] policy-api | [2024-02-20T23:15:05.143+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2024-02-20T23:15:05.243+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2024-02-20T23:15:05.286+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2024-02-20T23:15:05.305+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.657 seconds (process running for 11.266) policy-api | [2024-02-20T23:15:21.831+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2024-02-20T23:15:21.831+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2024-02-20T23:15:21.832+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms policy-api | [2024-02-20T23:15:22.128+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: policy-api | [] policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | 
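The policy-db-migrator entries above apply numbered upgrade scripts (0130, 0140, ...), each of which is a CREATE TABLE IF NOT EXISTS statement, so re-running them against an already-initialized schema is harmless. The tables they create can be spot-checked directly against the MariaDB container that policy-api waited on (172.17.0.2:3306). A minimal sketch, assuming pymysql is available; the database name and credentials below are placeholders for illustration, not values taken from this log:

```python
# Minimal sketch: verify that policy-db-migrator created the expected tables.
# Host/port come from the log above; database name and credentials are
# assumptions, not values taken from this build log.
import pymysql

conn = pymysql.connect(
    host="172.17.0.2",      # mariadb container, per "Waiting for mariadb port 3306..."
    port=3306,
    user="policy_user",     # assumption: replace with the CSIT database user
    password="policy_user", # assumption: replace with the CSIT database password
    database="policyadmin", # assumption: schema targeted by the migrator
)
try:
    with conn.cursor() as cur:
        cur.execute("SHOW TABLES LIKE 'jpapdpsubgroup%'")
        for (table,) in cur.fetchall():
            print(table)
finally:
    conn.close()
```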
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | server.max.startup.time.ms = 9223372036854775807 kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 
102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.partition.verification.enable = true kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | unstable.api.versions.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null policy-apex-pdp | [2024-02-20T23:15:18.613+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708470918612 policy-apex-pdp | [2024-02-20T23:15:18.614+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2, groupId=b20135be-18a4-4de4-8569-3ebb4824ad25] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-02-20T23:15:18.614+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d2486706-79ff-4479-b968-3a75721f5bf9, alive=false, publisher=null]]: starting policy-apex-pdp | [2024-02-20T23:15:18.637+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | buffer.memory = 33554432 policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.type = none policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | delivery.timeout.ms = 120000 policy-apex-pdp | enable.idempotence = true policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | key.serializer = class 
org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-apex-pdp | max.request.size = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-apex-pdp | partitioner.class = null policy-apex-pdp | partitioner.ignore.keys = false policy-apex-pdp | receive.buffer.bytes = 32768 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retries = 2147483647 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql policy-db-migrator | -------------- policy-db-migrator | 
CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | 
ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | transaction.timeout.ms = 60000 policy-apex-pdp | transactional.id = null policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | policy-apex-pdp | [2024-02-20T23:15:18.660+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. policy-apex-pdp | [2024-02-20T23:15:18.703+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-02-20T23:15:18.703+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-02-20T23:15:18.703+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708470918703 policy-apex-pdp | [2024-02-20T23:15:18.704+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d2486706-79ff-4479-b968-3a75721f5bf9, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-apex-pdp | [2024-02-20T23:15:18.704+00:00|INFO|ServiceManager|main] service manager starting set alive policy-apex-pdp | [2024-02-20T23:15:18.704+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-apex-pdp | [2024-02-20T23:15:18.707+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-apex-pdp | [2024-02-20T23:15:18.707+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-apex-pdp | [2024-02-20T23:15:18.709+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-apex-pdp | [2024-02-20T23:15:18.709+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-apex-pdp | [2024-02-20T23:15:18.709+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-apex-pdp | [2024-02-20T23:15:18.709+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b20135be-18a4-4de4-8569-3ebb4824ad25, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866 policy-apex-pdp | [2024-02-20T23:15:18.711+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b20135be-18a4-4de4-8569-3ebb4824ad25, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-apex-pdp | 
[2024-02-20T23:15:18.711+00:00|INFO|ServiceManager|main] service manager starting Create REST server grafana | logger=migrator t=2024-02-20T23:14:45.294342762Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.740474ms grafana | logger=migrator t=2024-02-20T23:14:45.297543229Z level=info msg="Executing migration" id="Update uid column values in alert_notification" grafana | logger=migrator t=2024-02-20T23:14:45.297692241Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=149.372µs grafana | logger=migrator t=2024-02-20T23:14:45.301768126Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2024-02-20T23:14:45.302367571Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=599.475µs grafana | logger=migrator t=2024-02-20T23:14:45.307420514Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2024-02-20T23:14:45.307963539Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=545.755µs grafana | logger=migrator t=2024-02-20T23:14:45.311837042Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2024-02-20T23:14:45.314489895Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.649713ms grafana | logger=migrator t=2024-02-20T23:14:45.317996795Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2024-02-20T23:14:45.318161687Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=165.362µs grafana | logger=migrator t=2024-02-20T23:14:45.321774808Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2024-02-20T23:14:45.322453184Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=682.636µs grafana | logger=migrator t=2024-02-20T23:14:45.326060305Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator t=2024-02-20T23:14:45.326932203Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=871.317µs grafana | logger=migrator t=2024-02-20T23:14:45.329837597Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2024-02-20T23:14:45.33010393Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=244.623µs grafana | logger=migrator t=2024-02-20T23:14:45.332919304Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2024-02-20T23:14:45.333826942Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=907.318µs grafana | logger=migrator t=2024-02-20T23:14:45.378691358Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator t=2024-02-20T23:14:45.384025684Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=5.334016ms grafana | logger=migrator t=2024-02-20T23:14:45.389043127Z level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator 
t=2024-02-20T23:14:45.389759824Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=716.617µs grafana | logger=migrator t=2024-02-20T23:14:45.392394536Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2024-02-20T23:14:45.393289594Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=895.018µs grafana | logger=migrator t=2024-02-20T23:14:45.4090281Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2024-02-20T23:14:45.411243659Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=2.21559ms grafana | logger=migrator t=2024-02-20T23:14:45.415630306Z level=info msg="Executing migration" id="add index annotation 4 v3" grafana | logger=migrator t=2024-02-20T23:14:45.418557751Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=2.928485ms grafana | logger=migrator t=2024-02-20T23:14:45.423752666Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2024-02-20T23:14:45.423782277Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=33.991µs grafana | logger=migrator t=2024-02-20T23:14:45.426369749Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2024-02-20T23:14:45.431013979Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.64386ms grafana | logger=migrator t=2024-02-20T23:14:45.436329595Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2024-02-20T23:14:45.437176122Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=847.177µs grafana | logger=migrator t=2024-02-20T23:14:45.442861431Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2024-02-20T23:14:45.445684745Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=2.823244ms grafana | logger=migrator t=2024-02-20T23:14:45.496302701Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2024-02-20T23:14:45.498928874Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=2.625703ms grafana | logger=migrator t=2024-02-20T23:14:45.503887597Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2024-02-20T23:14:45.504842175Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=954.388µs grafana | logger=migrator t=2024-02-20T23:14:45.54606866Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2024-02-20T23:14:45.547528993Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.460103ms grafana | logger=migrator t=2024-02-20T23:14:45.552592386Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" grafana | logger=migrator t=2024-02-20T23:14:45.569187599Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=16.594773ms grafana | logger=migrator 
t=2024-02-20T23:14:45.574271863Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2024-02-20T23:14:45.574812247Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=539.374µs grafana | logger=migrator t=2024-02-20T23:14:45.579835181Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2024-02-20T23:14:45.580750759Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=915.488µs grafana | logger=migrator t=2024-02-20T23:14:45.583907466Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2024-02-20T23:14:45.584198338Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=287.022µs grafana | logger=migrator t=2024-02-20T23:14:45.58670178Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2024-02-20T23:14:45.587195964Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=493.954µs grafana | logger=migrator t=2024-02-20T23:14:45.621045455Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2024-02-20T23:14:45.621406669Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=362.264µs grafana | logger=migrator t=2024-02-20T23:14:45.62618349Z level=info msg="Executing migration" id="Add created time to annotation table" policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 
policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | -------------- policy-apex-pdp | [2024-02-20T23:15:18.723+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: policy-apex-pdp | [] policy-apex-pdp | [2024-02-20T23:15:18.732+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"0a94d7ed-e196-4974-aa3c-6b85b0ba5fa6","timestampMs":1708470918709,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-02-20T23:15:18.988+00:00|INFO|ServiceManager|main] service manager starting Rest Server policy-apex-pdp | [2024-02-20T23:15:18.989+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-02-20T23:15:18.989+00:00|INFO|ServiceManager|main] service manager starting REST 
RestServerParameters policy-apex-pdp | [2024-02-20T23:15:18.989+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@63f34b70{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@641856{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-02-20T23:15:19.000+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-02-20T23:15:19.000+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-02-20T23:15:19.001+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. kafka | zookeeper.ssl.ocsp.enable = false kafka | zookeeper.ssl.protocol = TLSv1.2 kafka | zookeeper.ssl.truststore.location = null kafka | zookeeper.ssl.truststore.password = null kafka | zookeeper.ssl.truststore.type = null kafka | (kafka.server.KafkaConfig) kafka | [2024-02-20 23:14:52,284] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-02-20 23:14:52,286] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-02-20 23:14:52,288] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-02-20 23:14:52,294] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) kafka | [2024-02-20 23:14:52,321] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) kafka | [2024-02-20 23:14:52,326] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) kafka | [2024-02-20 23:14:52,335] INFO Loaded 0 logs in 14ms (kafka.log.LogManager) kafka | [2024-02-20 23:14:52,337] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) kafka | [2024-02-20 23:14:52,338] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) kafka | [2024-02-20 23:14:52,348] INFO Starting the log cleaner (kafka.log.LogCleaner) kafka | [2024-02-20 23:14:52,392] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) kafka | [2024-02-20 23:14:52,425] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2024-02-20 23:14:52,437] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2024-02-20 23:14:52,462] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2024-02-20 23:14:52,782] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-02-20 23:14:52,799] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) kafka | [2024-02-20 23:14:52,799] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) kafka | [2024-02-20 23:14:52,804] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) kafka | [2024-02-20 23:14:52,808] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2024-02-20 23:14:52,829] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-02-20 23:14:52,830] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-02-20 23:14:52,832] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-02-20 23:14:52,833] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-02-20 23:14:52,837] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-02-20 23:14:52,843] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) kafka | [2024-02-20 23:14:52,846] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) kafka | [2024-02-20 23:14:52,866] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) kafka | [2024-02-20 23:14:52,890] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1708470892880,1708470892880,1,0,0,72057612090146817,258,0,27 kafka | (kafka.zk.KafkaZkClient) kafka | [2024-02-20 23:14:52,891] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) kafka | [2024-02-20 23:14:52,939] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) kafka | [2024-02-20 23:14:52,945] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-02-20 23:14:52,950] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-02-20 23:14:52,950] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-02-20 23:14:52,959] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) kafka | [2024-02-20 23:14:52,963] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | overriding logback.xml simulator | 2024-02-20 23:14:40,498 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | 2024-02-20 23:14:40,553 INFO org.onap.policy.models.simulators starting simulator | 2024-02-20 23:14:40,553 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties simulator | 2024-02-20 23:14:40,732 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION simulator | 2024-02-20 23:14:40,733 INFO org.onap.policy.models.simulators starting A&AI simulator simulator | 2024-02-20 23:14:40,833 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-02-20 23:14:40,844 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-02-20 23:14:40,846 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-02-20 23:14:40,851 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 simulator | 2024-02-20 23:14:40,913 INFO Session workerName=node0 simulator | 2024-02-20 23:14:41,433 INFO Using GSON for REST calls simulator | 2024-02-20 23:14:41,513 INFO Started o.e.j.s.ServletContextHandler@b6b1987{/,null,AVAILABLE} simulator | 2024-02-20 23:14:41,520 INFO Started A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} simulator | 2024-02-20 23:14:41,527 INFO Started Server@2a2c13a8{STARTING}[11.0.20,sto=0] @1499ms simulator | 2024-02-20 23:14:41,527 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,AVAILABLE}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4319 ms. 
simulator | 2024-02-20 23:14:41,533 INFO org.onap.policy.models.simulators starting SDNC simulator simulator | 2024-02-20 23:14:41,536 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-02-20 23:14:41,537 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-02-20 23:14:41,538 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-02-20 23:14:41,539 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 simulator | 2024-02-20 23:14:41,550 INFO Session workerName=node0 simulator | 2024-02-20 23:14:41,606 INFO Using GSON for REST calls simulator | 2024-02-20 23:14:41,619 INFO Started o.e.j.s.ServletContextHandler@6941827a{/,null,AVAILABLE} simulator | 2024-02-20 23:14:41,621 INFO Started SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} simulator | 2024-02-20 23:14:41,621 INFO Started Server@62452cc9{STARTING}[11.0.20,sto=0] @1593ms policy-apex-pdp | [2024-02-20T23:15:19.000+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, 
/*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@63f34b70{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@641856{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-02-20T23:15:19.127+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2, groupId=b20135be-18a4-4de4-8569-3ebb4824ad25] Cluster ID: On8LQcwTQAOf3IoV4hs6OA policy-apex-pdp | [2024-02-20T23:15:19.127+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: On8LQcwTQAOf3IoV4hs6OA policy-apex-pdp | [2024-02-20T23:15:19.130+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-apex-pdp | [2024-02-20T23:15:19.130+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2, groupId=b20135be-18a4-4de4-8569-3ebb4824ad25] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-apex-pdp | [2024-02-20T23:15:19.136+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2, groupId=b20135be-18a4-4de4-8569-3ebb4824ad25] (Re-)joining group policy-apex-pdp | [2024-02-20T23:15:19.150+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2, groupId=b20135be-18a4-4de4-8569-3ebb4824ad25] Request joining group due to: need to re-join with the given member-id: consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2-37c4ee8e-f029-4055-ab88-8e8d67026094 policy-apex-pdp | [2024-02-20T23:15:19.150+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2, groupId=b20135be-18a4-4de4-8569-3ebb4824ad25] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-apex-pdp | [2024-02-20T23:15:19.151+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2, groupId=b20135be-18a4-4de4-8569-3ebb4824ad25] (Re-)joining group policy-apex-pdp | [2024-02-20T23:15:19.696+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-apex-pdp | [2024-02-20T23:15:19.698+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-apex-pdp | [2024-02-20T23:15:22.155+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2, groupId=b20135be-18a4-4de4-8569-3ebb4824ad25] Successfully joined group with generation Generation{generationId=1, memberId='consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2-37c4ee8e-f029-4055-ab88-8e8d67026094', protocol='range'} policy-apex-pdp | [2024-02-20T23:15:22.164+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2, groupId=b20135be-18a4-4de4-8569-3ebb4824ad25] Finished assignment for group at generation 1: {consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2-37c4ee8e-f029-4055-ab88-8e8d67026094=Assignment(partitions=[policy-pdp-pap-0])} policy-apex-pdp | [2024-02-20T23:15:22.173+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2, groupId=b20135be-18a4-4de4-8569-3ebb4824ad25] Successfully synced group in generation Generation{generationId=1, memberId='consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2-37c4ee8e-f029-4055-ab88-8e8d67026094', protocol='range'} policy-apex-pdp | [2024-02-20T23:15:22.176+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2, groupId=b20135be-18a4-4de4-8569-3ebb4824ad25] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-apex-pdp | [2024-02-20T23:15:22.178+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2, groupId=b20135be-18a4-4de4-8569-3ebb4824ad25] Adding newly assigned partitions: policy-pdp-pap-0 policy-apex-pdp | [2024-02-20T23:15:22.187+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2, groupId=b20135be-18a4-4de4-8569-3ebb4824ad25] Found no committed offset for partition policy-pdp-pap-0 policy-apex-pdp | [2024-02-20T23:15:22.200+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2, groupId=b20135be-18a4-4de4-8569-3ebb4824ad25] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
policy-apex-pdp | [2024-02-20T23:15:38.710+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2eeee666-4d15-4cfd-8347-ff3503ae3470","timestampMs":1708470938710,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-02-20T23:15:38.735+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2eeee666-4d15-4cfd-8347-ff3503ae3470","timestampMs":1708470938710,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-02-20T23:15:38.738+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-02-20T23:15:38.898+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-8fedec74-2ca5-4ce1-9cbe-641163125da1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"244fccc0-f73f-49b9-b667-3414ddacd90b","timestampMs":1708470938843,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-20T23:15:38.907+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-apex-pdp | [2024-02-20T23:15:38.907+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"daa43d58-3a45-4c97-aacd-3f032a1af7e7","timestampMs":1708470938907,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-02-20T23:15:38.908+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"244fccc0-f73f-49b9-b667-3414ddacd90b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"69cac102-f9ed-4ecb-9ec0-f0d4e09326b1","timestampMs":1708470938907,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-20T23:15:38.916+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"daa43d58-3a45-4c97-aacd-3f032a1af7e7","timestampMs":1708470938907,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-02-20T23:15:38.916+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-02-20T23:15:38.917+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"244fccc0-f73f-49b9-b667-3414ddacd90b","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"69cac102-f9ed-4ecb-9ec0-f0d4e09326b1","timestampMs":1708470938907,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-20T23:15:38.917+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-02-20T23:15:38.958+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-8fedec74-2ca5-4ce1-9cbe-641163125da1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"883bd020-d766-4b04-85c5-046e0e372bb6","timestampMs":1708470938844,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-20T23:15:38.960+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"883bd020-d766-4b04-85c5-046e0e372bb6","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"dde314cf-626e-4fce-8471-18e13ff86a82","timestampMs":1708470938960,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-20T23:15:38.970+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"883bd020-d766-4b04-85c5-046e0e372bb6","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"dde314cf-626e-4fce-8471-18e13ff86a82","timestampMs":1708470938960,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-20T23:15:38.970+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-02-20T23:15:38.995+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-8fedec74-2ca5-4ce1-9cbe-641163125da1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"fc3737ad-ad98-4ffa-9216-698c7518a46d","timestampMs":1708470938971,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-20T23:15:38.996+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, 
PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:45.63203066Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.84804ms grafana | logger=migrator t=2024-02-20T23:14:45.640384412Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2024-02-20T23:14:45.647803256Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=7.418574ms grafana | logger=migrator t=2024-02-20T23:14:45.652265444Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2024-02-20T23:14:45.653179972Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=914.448µs grafana | logger=migrator t=2024-02-20T23:14:45.657842923Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2024-02-20T23:14:45.658484808Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=641.675µs grafana | logger=migrator t=2024-02-20T23:14:45.661367393Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2024-02-20T23:14:45.661533924Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=166.651µs grafana | logger=migrator t=2024-02-20T23:14:45.665940782Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2024-02-20T23:14:45.676274691Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=10.332259ms grafana | 
logger=migrator t=2024-02-20T23:14:45.679120306Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2024-02-20T23:14:45.679792432Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=672.826µs grafana | logger=migrator t=2024-02-20T23:14:45.682405024Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2024-02-20T23:14:45.682574335Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=169.511µs grafana | logger=migrator t=2024-02-20T23:14:45.686463449Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2024-02-20T23:14:45.686847632Z level=info msg="Migration successfully executed" id="Move region to single row" duration=384.363µs grafana | logger=migrator t=2024-02-20T23:14:45.690196411Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2024-02-20T23:14:45.691856596Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.663595ms grafana | logger=migrator t=2024-02-20T23:14:45.695511717Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2024-02-20T23:14:45.696320814Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=811.787µs grafana | logger=migrator t=2024-02-20T23:14:45.698532343Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2024-02-20T23:14:45.69939956Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=867.017µs grafana | logger=migrator t=2024-02-20T23:14:45.705073919Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2024-02-20T23:14:45.706549412Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.478023ms grafana | logger=migrator t=2024-02-20T23:14:45.709344996Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2024-02-20T23:14:45.71100844Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.664494ms grafana | logger=migrator t=2024-02-20T23:14:45.71560822Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2024-02-20T23:14:45.716481727Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=873.967µs grafana | logger=migrator t=2024-02-20T23:14:45.721609542Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2024-02-20T23:14:45.721691803Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=85.251µs grafana | logger=migrator t=2024-02-20T23:14:45.726414173Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2024-02-20T23:14:45.727014118Z level=info msg="Migration successfully executed" id="create test_data table" duration=600.075µs grafana | logger=migrator 
t=2024-02-20T23:14:45.729598671Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2024-02-20T23:14:45.730265426Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=666.486µs grafana | logger=migrator t=2024-02-20T23:14:45.734669014Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2024-02-20T23:14:45.73540282Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=733.616µs grafana | logger=migrator t=2024-02-20T23:14:45.739331105Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2024-02-20T23:14:45.74001861Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=687.505µs grafana | logger=migrator t=2024-02-20T23:14:45.742843055Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2024-02-20T23:14:45.743052406Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=209.921µs grafana | logger=migrator t=2024-02-20T23:14:45.746448256Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" prometheus | ts=2024-02-20T23:14:43.145Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d prometheus | ts=2024-02-20T23:14:43.145Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)" prometheus | ts=2024-02-20T23:14:43.145Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)" prometheus | ts=2024-02-20T23:14:43.145Z caller=main.go:594 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" prometheus | ts=2024-02-20T23:14:43.146Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)" prometheus | ts=2024-02-20T23:14:43.146Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)" prometheus | ts=2024-02-20T23:14:43.149Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 prometheus | ts=2024-02-20T23:14:43.150Z caller=main.go:1039 level=info msg="Starting TSDB ..." prometheus | ts=2024-02-20T23:14:43.151Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090 prometheus | ts=2024-02-20T23:14:43.151Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 prometheus | ts=2024-02-20T23:14:43.157Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" prometheus | ts=2024-02-20T23:14:43.157Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.09µs prometheus | ts=2024-02-20T23:14:43.157Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while" prometheus | ts=2024-02-20T23:14:43.158Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 prometheus | ts=2024-02-20T23:14:43.158Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=40.681µs wal_replay_duration=446.473µs wbl_replay_duration=210ns total_replay_duration=542.394µs prometheus | ts=2024-02-20T23:14:43.163Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC prometheus | ts=2024-02-20T23:14:43.163Z caller=main.go:1063 level=info msg="TSDB started" prometheus | ts=2024-02-20T23:14:43.163Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | ts=2024-02-20T23:14:43.166Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=3.137625ms db_storage=1.09µs remote_storage=1.73µs web_handler=210ns query_engine=750ns scrape=2.491539ms scrape_sd=123.241µs notify=41.941µs notify_sd=9.41µs rules=1.41µs tracing=4.35µs prometheus | ts=2024-02-20T23:14:43.166Z caller=main.go:1024 level=info msg="Server is ready to receive web requests." prometheus | ts=2024-02-20T23:14:43.166Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..." kafka | [2024-02-20 23:14:52,972] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:52,973] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:14:52,977] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:52,983] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) kafka | [2024-02-20 23:14:52,991] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2024-02-20 23:14:52,993] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) kafka | [2024-02-20 23:14:52,993] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) kafka | [2024-02-20 23:14:53,007] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) kafka | [2024-02-20 23:14:53,007] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,013] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,017] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,019] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,026] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-02-20 23:14:53,043] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,048] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,052] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) kafka | [2024-02-20 23:14:53,056] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) kafka | [2024-02-20 23:14:53,063] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) kafka | [2024-02-20 23:14:53,066] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) kafka | [2024-02-20 23:14:53,068] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) kafka | [2024-02-20 23:14:53,069] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,070] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,070] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,070] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,071] INFO Awaiting socket connections on 0.0.0.0:29092. 
(kafka.network.DataPlaneAcceptor) kafka | [2024-02-20 23:14:53,074] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,074] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,074] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,075] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) kafka | [2024-02-20 23:14:53,075] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,079] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-02-20 23:14:53,079] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-02-20 23:14:53,079] INFO Kafka startTimeMs: 1708470893074 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-02-20 23:14:53,079] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) kafka | [2024-02-20 23:14:53,080] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2024-02-20 23:14:53,087] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) kafka | [2024-02-20 23:14:53,088] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-02-20 23:14:53,096] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-02-20 23:14:53,096] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2024-02-20 23:14:53,097] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2024-02-20 23:14:53,097] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2024-02-20 23:14:53,100] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) kafka | [2024-02-20 23:14:53,100] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,100] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) kafka | [2024-02-20 23:14:53,105] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,106] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,106] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,106] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,121] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions 
triggered by ZkTriggered (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,136] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:53,177] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2024-02-20 23:14:53,185] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-02-20 23:14:53,219] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2024-02-20 23:14:58,137] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) kafka | [2024-02-20 23:14:58,138] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2024-02-20 23:15:17,777] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) kafka | [2024-02-20 23:15:17,779] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2024-02-20 23:15:17,792] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2024-02-20 23:15:17,792] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, 
parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, 
version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | -------------- kafka | [2024-02-20 23:15:17,835] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(BfQFazPiQayoGUpac3B4xw),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(IOZYBLsmQm6YXR5s4m9LhQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) kafka | [2024-02-20 23:15:17,837] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2024-02-20 23:15:17,843] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,844] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,845] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,845] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | Waiting for mariadb port 3306... policy-pap | mariadb (172.17.0.2:3306) open policy-pap | Waiting for kafka port 9092... 
policy-pap | kafka (172.17.0.9:9092) open policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.7:6969) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . ____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | :: Spring Boot :: (v3.1.8) policy-pap | policy-pap | [2024-02-20T23:15:07.443+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 30 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2024-02-20T23:15:07.445+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" policy-pap | [2024-02-20T23:15:09.346+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2024-02-20T23:15:09.471+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 113 ms. Found 7 JPA repository interfaces. policy-pap | [2024-02-20T23:15:09.913+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-02-20T23:15:09.914+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-02-20T23:15:10.595+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-pap | [2024-02-20T23:15:10.604+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2024-02-20T23:15:10.605+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2024-02-20T23:15:10.606+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] policy-pap | [2024-02-20T23:15:10.711+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2024-02-20T23:15:10.712+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3183 ms policy-pap | [2024-02-20T23:15:11.123+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2024-02-20T23:15:11.218+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 policy-pap | [2024-02-20T23:15:11.221+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer policy-pap | [2024-02-20T23:15:11.275+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-pap | [2024-02-20T23:15:11.663+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-pap | [2024-02-20T23:15:11.684+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
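
At this point policy-pap has read papParameters.yaml, scanned its Spring Data JPA repositories, started Tomcat on 6969, and is bringing up its HikariCP connection pool against the mariadb container (the "HikariPool-1 - Starting..." entry just above completes just below). A small standalone sketch of a HikariCP pool pointed at a MariaDB instance follows; only the host and port mirror the log, while the database name and credentials are placeholders, not the CSIT configuration.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.SQLException;

// Illustrative HikariCP setup against the mariadb container seen in the log.
public final class PapDataSourceSketch {

    public static void main(String[] args) throws SQLException {
        HikariConfig config = new HikariConfig();
        // Placeholder database name and credentials; host/port taken from the log (mariadb:3306).
        config.setJdbcUrl("jdbc:mariadb://mariadb:3306/policydb");
        config.setUsername("policy_user");
        config.setPassword("policy_password");
        config.setMaximumPoolSize(10);  // explicit here; 10 is also HikariCP's default

        // Constructing the data source starts the pool ("HikariPool-1 - Starting..." in the log);
        // checking out a connection exercises the pool's connection creation.
        try (HikariDataSource dataSource = new HikariDataSource(config);
             Connection connection = dataSource.getConnection()) {
            System.out.println("connected: " + connection.isValid(2));
        }
    }
}
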
policy-pap | [2024-02-20T23:15:11.796+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@397ef2 policy-pap | [2024-02-20T23:15:11.799+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. policy-pap | [2024-02-20T23:15:11.830+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) policy-pap | [2024-02-20T23:15:11.831+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead policy-pap | [2024-02-20T23:15:13.831+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-pap | [2024-02-20T23:15:13.835+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2024-02-20T23:15:14.351+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository simulator | 2024-02-20 23:14:41,621 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,AVAILABLE}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4917 ms. 
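
The simulator container is a single JVM that brings up several embedded Jetty servers, each wrapping a Jersey ServletContainer: the SDNC instance on 6668 above, with the SO (6669) and VFC (6670) instances following below. A minimal embedded-Jetty sketch in the same spirit is shown here; the servlet, class names, and response body are illustrative stand-ins, not the ONAP JettyServletServer implementation.

import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

// Illustrative embedded Jetty server, loosely mirroring the simulator's per-service Jetty instances.
public final class MiniSimulator {

    // Trivial servlet standing in for the Jersey ServletContainer used by the real simulators.
    static final class PingServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            resp.setContentType("application/json");
            resp.getWriter().write("{\"status\":\"UP\"}");
        }
    }

    public static void main(String[] args) throws Exception {
        Server server = new Server(6669);                        // SO simulator port from the log; any free port works
        ServletContextHandler context = new ServletContextHandler();
        context.setContextPath("/");
        context.addServlet(new ServletHolder(new PingServlet()), "/*");
        server.setHandler(context);
        server.start();                                          // Jetty then logs "Started Server@..." as above
        server.join();
    }
}
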
simulator | 2024-02-20 23:14:41,622 INFO org.onap.policy.models.simulators starting SO simulator simulator | 2024-02-20 23:14:41,624 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-02-20 23:14:41,625 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-02-20 23:14:41,625 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-02-20 23:14:41,626 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 simulator | 2024-02-20 23:14:41,639 INFO Session workerName=node0 simulator | 2024-02-20 23:14:41,695 INFO Using GSON for REST calls simulator | 2024-02-20 23:14:41,709 INFO Started o.e.j.s.ServletContextHandler@5e81e5ac{/,null,AVAILABLE} simulator | 2024-02-20 23:14:41,710 INFO Started SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} simulator | 2024-02-20 23:14:41,710 INFO Started Server@488eb7f2{STARTING}[11.0.20,sto=0] @1683ms simulator | 2024-02-20 23:14:41,710 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, 
user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,AVAILABLE}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4915 ms. simulator | 2024-02-20 23:14:41,711 INFO org.onap.policy.models.simulators starting VFC simulator simulator | 2024-02-20 23:14:41,713 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-02-20 23:14:41,713 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-02-20 23:14:41,714 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-02-20 23:14:41,714 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 simulator | 2024-02-20 23:14:41,719 INFO Session workerName=node0 simulator | 2024-02-20 23:14:41,759 INFO Using GSON for REST calls simulator | 2024-02-20 23:14:41,766 INFO Started o.e.j.s.ServletContextHandler@320de594{/,null,AVAILABLE} simulator | 2024-02-20 23:14:41,767 INFO Started VFC simulator@3fa2213{HTTP/1.1, 
(http/1.1)}{0.0.0.0:6670} simulator | 2024-02-20 23:14:41,767 INFO Started Server@6035b93b{STARTING}[11.0.20,sto=0] @1740ms simulator | 2024-02-20 23:14:41,768 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,AVAILABLE}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4946 ms. simulator | 2024-02-20 23:14:41,768 INFO org.onap.policy.models.simulators started policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"fc3737ad-ad98-4ffa-9216-698c7518a46d","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"3657ce3e-434a-4f8c-8e3a-fde3720bedeb","timestampMs":1708470938996,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-20T23:15:39.005+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"fc3737ad-ad98-4ffa-9216-698c7518a46d","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"3657ce3e-434a-4f8c-8e3a-fde3720bedeb","timestampMs":1708470938996,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-02-20T23:15:39.006+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-02-20T23:15:56.179+00:00|INFO|RequestLog|qtp1068445309-33] 172.17.0.4 - policyadmin [20/Feb/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.49.1" policy-apex-pdp | [2024-02-20T23:16:56.093+00:00|INFO|RequestLog|qtp1068445309-28] 172.17.0.4 - policyadmin [20/Feb/2024:23:16:56 +0000] "GET /metrics HTTP/1.1" 200 10654 "-" "Prometheus/2.49.1" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) policy-db-migrator | 
-------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0640-toscanodetypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | -------------- kafka | [2024-02-20 23:15:17,845] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,845] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,845] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,845] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,845] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,846] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,846] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,846] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,846] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,846] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,847] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,847] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,851] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,851] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,852] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,852] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,852] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,852] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,852] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,852] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,852] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,852] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,853] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,853] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 
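
The long runs of state.change.logger entries above and below are the single-broker controller walking each new partition (the 50 __consumer_offsets partitions plus policy-pdp-pap-0) from NonExistentPartition to NewPartition, and each of their replicas from NonExistentReplica to NewReplica, all with a replication factor of 1. For reference, a hedged Kafka AdminClient sketch that would trigger the same kind of new-partition callback for the policy-pdp-pap topic follows; the bootstrap address is assumed from the log, and the explicit createTopics call is illustrative (in the CSIT run the topic may equally be auto-created on first use).

import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

// Illustrative topic creation matching the log: 1 partition, replication factor 1 (single broker).
public final class CreatePdpPapTopic {

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // broker address assumed from the log

        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("policy-pdp-pap", 1, (short) 1);   // partitions=1, replicas=1
            admin.createTopics(List.of(topic)).all().get();                  // controller then logs the state changes
        }
    }
}
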
kafka | [2024-02-20 23:15:17,853] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,853] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,853] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,853] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,853] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,853] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,854] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,854] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,854] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,854] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,854] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,854] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,854] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,854] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,855] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-02-20 23:15:17,855] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, 
parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) 
DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0770-toscarequirement.sql kafka | [2024-02-20 23:15:17,855] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-02-20 23:15:17,861] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,861] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,861] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,861] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,861] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,861] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,861] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,862] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,862] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 
from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,862] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,862] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,862] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,862] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,862] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,862] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,862] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,862] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,863] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,863] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,863] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,863] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,863] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,863] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,863] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,863] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,864] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,864] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,864] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,864] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,864] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,864] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,864] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,864] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,864] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,864] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,865] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,865] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,865] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,865] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,865] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,865] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0780-toscarequirements.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-db-migrator | -------------- 
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | grafana | logger=migrator t=2024-02-20T23:14:45.747139522Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=684.565µs grafana | logger=migrator t=2024-02-20T23:14:45.750323479Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2024-02-20T23:14:45.75044616Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=124.061µs grafana | logger=migrator t=2024-02-20T23:14:45.757195088Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2024-02-20T23:14:45.757991555Z level=info msg="Migration successfully executed" id="create team table" duration=799.057µs grafana | logger=migrator t=2024-02-20T23:14:45.760637198Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2024-02-20T23:14:45.761406324Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=772.756µs grafana | logger=migrator t=2024-02-20T23:14:45.764073537Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2024-02-20T23:14:45.765158857Z 
level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.0866ms grafana | logger=migrator t=2024-02-20T23:14:45.769915378Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2024-02-20T23:14:45.776465404Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=6.546236ms grafana | logger=migrator t=2024-02-20T23:14:45.784858737Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2024-02-20T23:14:45.785105929Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=250.642µs grafana | logger=migrator t=2024-02-20T23:14:45.788416227Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2024-02-20T23:14:45.789159034Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=742.777µs grafana | logger=migrator t=2024-02-20T23:14:45.792436822Z level=info msg="Executing migration" id="create team member table" grafana | logger=migrator t=2024-02-20T23:14:45.79333802Z level=info msg="Migration successfully executed" id="create team member table" duration=905.258µs grafana | logger=migrator t=2024-02-20T23:14:45.799340631Z level=info msg="Executing migration" id="add index team_member.org_id" grafana | logger=migrator t=2024-02-20T23:14:45.800635802Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.298741ms grafana | logger=migrator t=2024-02-20T23:14:45.804253844Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" grafana | logger=migrator t=2024-02-20T23:14:45.805330533Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.073089ms grafana | logger=migrator t=2024-02-20T23:14:45.810281576Z level=info msg="Executing migration" id="add index team_member.team_id" grafana | logger=migrator t=2024-02-20T23:14:45.811044292Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=762.846µs grafana | logger=migrator t=2024-02-20T23:14:45.813521923Z level=info msg="Executing migration" id="Add column email to team table" grafana | logger=migrator t=2024-02-20T23:14:45.817977532Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.456029ms grafana | logger=migrator t=2024-02-20T23:14:45.821796535Z level=info msg="Executing migration" id="Add column external to team_member table" grafana | logger=migrator t=2024-02-20T23:14:45.824910752Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.114577ms grafana | logger=migrator t=2024-02-20T23:14:45.831231426Z level=info msg="Executing migration" id="Add column permission to team_member table" grafana | logger=migrator t=2024-02-20T23:14:45.837664232Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=6.439735ms grafana | logger=migrator t=2024-02-20T23:14:45.845521679Z level=info msg="Executing migration" id="create dashboard acl table" grafana | logger=migrator t=2024-02-20T23:14:45.846443767Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=919.068µs grafana | logger=migrator t=2024-02-20T23:14:45.852934873Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" grafana 
| logger=migrator t=2024-02-20T23:14:45.853958332Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.024279ms grafana | logger=migrator t=2024-02-20T23:14:45.860059164Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" grafana | logger=migrator t=2024-02-20T23:14:45.86072997Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=670.846µs grafana | logger=migrator t=2024-02-20T23:14:45.864994757Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" grafana | logger=migrator t=2024-02-20T23:14:45.865929375Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=933.948µs grafana | logger=migrator t=2024-02-20T23:14:45.871609534Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" grafana | logger=migrator t=2024-02-20T23:14:45.872461151Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=851.297µs grafana | logger=migrator t=2024-02-20T23:14:45.878853946Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" grafana | logger=migrator t=2024-02-20T23:14:45.880243168Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.381362ms grafana | logger=migrator t=2024-02-20T23:14:45.885581954Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" grafana | logger=migrator t=2024-02-20T23:14:45.886476952Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=895.068µs grafana | logger=migrator t=2024-02-20T23:14:45.895210107Z level=info msg="Executing migration" id="add index dashboard_permission" grafana | logger=migrator t=2024-02-20T23:14:45.896258756Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.048529ms grafana | logger=migrator t=2024-02-20T23:14:45.901912065Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" grafana | logger=migrator t=2024-02-20T23:14:45.902630201Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=718.486µs grafana | logger=migrator t=2024-02-20T23:14:45.908925245Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" grafana | logger=migrator t=2024-02-20T23:14:45.909298458Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=373.233µs grafana | logger=migrator t=2024-02-20T23:14:45.915512032Z level=info msg="Executing migration" id="create tag table" grafana | logger=migrator t=2024-02-20T23:14:45.91647464Z level=info msg="Migration successfully executed" id="create tag table" duration=962.478µs grafana | logger=migrator t=2024-02-20T23:14:45.921153371Z level=info msg="Executing migration" id="add index tag.key_value" grafana | logger=migrator t=2024-02-20T23:14:45.922619223Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.465172ms grafana | logger=migrator t=2024-02-20T23:14:45.926202324Z level=info msg="Executing migration" id="create login attempt table" grafana | logger=migrator t=2024-02-20T23:14:45.926828999Z level=info msg="Migration successfully executed" id="create login attempt table" duration=626.495µs grafana | logger=migrator 
t=2024-02-20T23:14:45.933513927Z level=info msg="Executing migration" id="add index login_attempt.username" policy-db-migrator | > upgrade 0820-toscatrigger.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > 
upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) policy-db-migrator | -------------- policy-db-migrator | grafana | logger=migrator t=2024-02-20T23:14:45.935350933Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.837916ms grafana | logger=migrator t=2024-02-20T23:14:45.941693187Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" grafana | logger=migrator t=2024-02-20T23:14:45.942563395Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=868.848µs grafana | logger=migrator t=2024-02-20T23:14:45.948672948Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" grafana | logger=migrator t=2024-02-20T23:14:45.968301207Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=19.62822ms grafana | logger=migrator t=2024-02-20T23:14:45.975227726Z level=info msg="Executing migration" id="create login_attempt v2" grafana | logger=migrator t=2024-02-20T23:14:45.9757108Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=483.214µs grafana | logger=migrator t=2024-02-20T23:14:45.979944277Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" grafana | logger=migrator t=2024-02-20T23:14:45.981317769Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.373842ms grafana | logger=migrator t=2024-02-20T23:14:45.987420121Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" grafana | logger=migrator t=2024-02-20T23:14:45.987698394Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=278.563µs grafana | logger=migrator t=2024-02-20T23:14:45.995320939Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" grafana | logger=migrator t=2024-02-20T23:14:45.996124946Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=807.827µs grafana | logger=migrator t=2024-02-20T23:14:46.000764406Z level=info msg="Executing migration" id="create user auth table" grafana | logger=migrator t=2024-02-20T23:14:46.002797883Z level=info msg="Migration successfully executed" id="create user auth table" duration=2.032947ms grafana | logger=migrator t=2024-02-20T23:14:46.011315681Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" grafana | logger=migrator t=2024-02-20T23:14:46.012184818Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=869.067µs grafana | logger=migrator 
t=2024-02-20T23:14:46.017639483Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" grafana | logger=migrator t=2024-02-20T23:14:46.017737363Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=99.12µs grafana | logger=migrator t=2024-02-20T23:14:46.027956956Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" grafana | logger=migrator t=2024-02-20T23:14:46.03592307Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.965894ms grafana | logger=migrator t=2024-02-20T23:14:46.041324034Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" grafana | logger=migrator t=2024-02-20T23:14:46.045294966Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.970832ms grafana | logger=migrator t=2024-02-20T23:14:46.052638575Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" grafana | logger=migrator t=2024-02-20T23:14:46.057588835Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.94854ms grafana | logger=migrator t=2024-02-20T23:14:46.062936027Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" grafana | logger=migrator t=2024-02-20T23:14:46.067828187Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.89191ms grafana | logger=migrator t=2024-02-20T23:14:46.077351163Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" grafana | logger=migrator t=2024-02-20T23:14:46.078731154Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.378811ms grafana | logger=migrator t=2024-02-20T23:14:46.084657382Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" grafana | logger=migrator t=2024-02-20T23:14:46.090332508Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.675926ms grafana | logger=migrator t=2024-02-20T23:14:46.094311459Z level=info msg="Executing migration" id="create server_lock table" grafana | logger=migrator t=2024-02-20T23:14:46.095023855Z level=info msg="Migration successfully executed" id="create server_lock table" duration=713.826µs grafana | logger=migrator t=2024-02-20T23:14:46.104950975Z level=info msg="Executing migration" id="add index server_lock.operation_uid" grafana | logger=migrator t=2024-02-20T23:14:46.106429027Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.477522ms grafana | logger=migrator t=2024-02-20T23:14:46.184821026Z level=info msg="Executing migration" id="create user auth token table" grafana | logger=migrator t=2024-02-20T23:14:46.186073826Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.25561ms grafana | logger=migrator t=2024-02-20T23:14:46.190557232Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" grafana | logger=migrator t=2024-02-20T23:14:46.192098184Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.541342ms grafana | logger=migrator t=2024-02-20T23:14:46.196680671Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" grafana | logger=migrator t=2024-02-20T23:14:46.198518806Z level=info msg="Migration successfully 
executed" id="add unique index user_auth_token.prev_auth_token" duration=1.838775ms grafana | logger=migrator t=2024-02-20T23:14:46.203347155Z level=info msg="Executing migration" id="add index user_auth_token.user_id" grafana | logger=migrator t=2024-02-20T23:14:46.204456643Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.109368ms grafana | logger=migrator t=2024-02-20T23:14:46.208873919Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" grafana | logger=migrator t=2024-02-20T23:14:46.214071721Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.197652ms grafana | logger=migrator t=2024-02-20T23:14:46.219916858Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" grafana | logger=migrator t=2024-02-20T23:14:46.223508367Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=3.591029ms grafana | logger=migrator t=2024-02-20T23:14:46.230778735Z level=info msg="Executing migration" id="create cache_data table" grafana | logger=migrator t=2024-02-20T23:14:46.232010955Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.23229ms grafana | logger=migrator t=2024-02-20T23:14:46.238826849Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" grafana | logger=migrator t=2024-02-20T23:14:46.240263531Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.431942ms grafana | logger=migrator t=2024-02-20T23:14:46.24383078Z level=info msg="Executing migration" id="create short_url table v1" kafka | [2024-02-20 23:15:17,865] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,865] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,865] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,866] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,866] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,866] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,868] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,868] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,868] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,868] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from 
NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-02-20 23:15:17,868] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-02-20 23:15:18,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,073] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,073] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,079] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,079] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,079] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,080] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,080] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,080] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,080] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,080] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,080] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,080] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,080] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,080] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,081] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | [2024-02-20T23:15:14.754+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository policy-pap | [2024-02-20T23:15:14.843+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository policy-pap | [2024-02-20T23:15:15.109+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 364f6f57-838f-467b-8ccc-3ae2767c47b5 policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-02-20 23:15:18,081] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,081] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,081] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
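For readers following the ConsumerConfig dump above, the following is a minimal, illustrative Java sketch of a Kafka consumer built with the same kind of settings the log shows (bootstrap server kafka:9092, group id 364f6f57-838f-467b-8ccc-3ae2767c47b5, latest offset reset, String deserializers, subscription to policy-pdp-pap). This is not the actual policy-pap implementation; the class name and the polling loop are hypothetical, and only values visible in the logged config dump are reused.

```java
// Illustrative sketch only; NOT policy-pap source code.
// Reuses configuration values printed in the ConsumerConfig dump above.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values taken from the logged ConsumerConfig dump.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "364f6f57-838f-467b-8ccc-3ae2767c47b5");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // The log shows the consumer subscribing to the policy-pdp-pap topic.
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            records.forEach(r -> System.out.println(r.value()));
        }
    }
}
```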
kafka | [2024-02-20 23:15:18,081] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,081] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,081] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,081] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,082] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,082] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,082] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,082] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,082] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,082] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,082] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,082] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,082] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,083] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,083] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,083] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,083] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,083] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,083] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,083] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,083] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX 
TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN 
KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | [2024-02-20T23:15:15.262+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-02-20T23:15:15.262+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-02-20T23:15:15.262+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708470915261 policy-pap | [2024-02-20T23:15:15.265+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-1, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Subscribed to topic(s): policy-pdp-pap policy-pap | [2024-02-20T23:15:15.265+00:00|INFO|ConsumerConfig|main] ConsumerConfig 
values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = policy-pap policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | grafana | logger=migrator t=2024-02-20T23:14:46.244602506Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=771.686µs grafana | logger=migrator t=2024-02-20T23:14:46.24890306Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2024-02-20T23:14:46.250023309Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.116869ms grafana | logger=migrator t=2024-02-20T23:14:46.259169773Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2024-02-20T23:14:46.259266364Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=97.531µs grafana | logger=migrator t=2024-02-20T23:14:46.266120078Z level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2024-02-20T23:14:46.266250199Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=130.141µs grafana | logger=migrator t=2024-02-20T23:14:46.272258678Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2024-02-20T23:14:46.273438377Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.179059ms grafana | logger=migrator t=2024-02-20T23:14:46.277669911Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-02-20T23:14:46.279544456Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.873435ms grafana | logger=migrator t=2024-02-20T23:14:46.284429336Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-02-20T23:14:46.285339453Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=910.097µs grafana | logger=migrator 
t=2024-02-20T23:14:46.290482124Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2024-02-20T23:14:46.290582485Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=100.941µs grafana | logger=migrator t=2024-02-20T23:14:46.295936328Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-02-20T23:14:46.29739144Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.454991ms grafana | logger=migrator t=2024-02-20T23:14:46.309465476Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-02-20T23:14:46.310410874Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=945.388µs grafana | logger=migrator t=2024-02-20T23:14:46.334482367Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-02-20T23:14:46.336766745Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=2.283958ms grafana | logger=migrator t=2024-02-20T23:14:46.342444621Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-02-20T23:14:46.343133557Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=689.486µs grafana | logger=migrator t=2024-02-20T23:14:46.34733138Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2024-02-20T23:14:46.352981286Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.649546ms grafana | logger=migrator t=2024-02-20T23:14:46.359272006Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2024-02-20T23:14:46.360154503Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=882.447µs grafana | logger=migrator t=2024-02-20T23:14:46.370338495Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2024-02-20T23:14:46.370424316Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=86.511µs grafana | logger=migrator t=2024-02-20T23:14:46.375019483Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2024-02-20T23:14:46.376296963Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.275781ms grafana | logger=migrator t=2024-02-20T23:14:46.380678898Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2024-02-20T23:14:46.381622416Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=940.988µs grafana | logger=migrator t=2024-02-20T23:14:46.393186368Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and 
version columns" grafana | logger=migrator t=2024-02-20T23:14:46.394406158Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.22507ms grafana | logger=migrator t=2024-02-20T23:14:46.404203887Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2024-02-20T23:14:46.404334448Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=132.351µs grafana | logger=migrator t=2024-02-20T23:14:46.409927063Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2024-02-20T23:14:46.41088Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=949.567µs grafana | logger=migrator t=2024-02-20T23:14:46.414732771Z level=info msg="Executing migration" id="create alert_instance table" policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 
0150-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | UPDATE jpapdpstatistics_enginestats a policy-db-migrator | JOIN pdpstatistics b policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp policy-db-migrator | SET a.id = b.id policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp grafana | logger=migrator t=2024-02-20T23:14:46.416117903Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.384622ms grafana | logger=migrator t=2024-02-20T23:14:46.425096795Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2024-02-20T23:14:46.4269694Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.872015ms grafana | logger=migrator t=2024-02-20T23:14:46.437096321Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2024-02-20T23:14:46.438919516Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.823155ms grafana | logger=migrator t=2024-02-20T23:14:46.445313657Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2024-02-20T23:14:46.452809327Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=7.49717ms kafka | [2024-02-20 23:15:18,083] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,084] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,084] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,084] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,084] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,084] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,084] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,084] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,084] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-02-20 23:15:18,088] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-02-20 23:15:18,092] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-02-20 23:15:18,092] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-02-20 23:15:18,092] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-02-20 23:15:18,093] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-02-20 23:15:18,093] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-02-20 23:15:18,093] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-02-20 23:15:18,093] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-02-20 23:15:18,094] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-02-20 23:15:18,094] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-02-20 23:15:18,094] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-02-20 23:15:18,094] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-02-20 23:15:18,097] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | policy-pap | 
[2024-02-20T23:15:15.271+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-02-20T23:15:15.271+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | [2024-02-20 23:15:18,097] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-02-20 23:15:18,097] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-02-20 23:15:18,097] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-02-20 23:15:18,098] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-02-20 23:15:18,098] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-02-20 23:15:18,098] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-02-20 23:15:18,098] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-02-20 23:15:18,098] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-02-20 23:15:18,098] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-02-20 23:15:18,098] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-02-20 23:15:18,098] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-02-20 23:15:18,099] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-02-20 23:15:18,099] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-02-20 23:15:18,099] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-02-20 23:15:18,099] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-02-20 23:15:18,099] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-02-20 23:15:18,099] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-02-20 23:15:18,099] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-02-20 23:15:18,100] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) policy-pap | [2024-02-20T23:15:15.271+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708470915271 policy-pap | [2024-02-20T23:15:15.271+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | -------------- policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) policy-db-migrator 
| -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscatrigger policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscaparameter policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata policy-db-migrator | -------------- kafka | [2024-02-20 23:15:18,100] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-02-20 23:15:18,100] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-02-20 23:15:18,100] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-02-20 23:15:18,100] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-02-20 23:15:18,100] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-02-20 23:15:18,100] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-02-20 23:15:18,100] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-02-20 23:15:18,101] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-02-20 23:15:18,101] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-02-20 23:15:18,101] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-02-20 23:15:18,101] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-02-20 23:15:18,101] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-02-20 23:15:18,101] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-02-20 23:15:18,101] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-02-20 23:15:18,102] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-02-20 23:15:18,102] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-02-20 23:15:18,102] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-02-20 23:15:18,102] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-02-20 23:15:18,102] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-02-20 23:15:18,103] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:46.463921926Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2024-02-20T23:14:46.464950595Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.028978ms grafana | logger=migrator t=2024-02-20T23:14:46.472157053Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2024-02-20T23:14:46.473783546Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.626603ms grafana | logger=migrator t=2024-02-20T23:14:46.480243757Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2024-02-20T23:14:46.524485422Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=44.236025ms grafana | logger=migrator t=2024-02-20T23:14:46.532151924Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2024-02-20T23:14:46.581081397Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=48.927583ms grafana | logger=migrator 
t=2024-02-20T23:14:46.586033666Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2024-02-20T23:14:46.586989084Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=955.158µs grafana | logger=migrator t=2024-02-20T23:14:46.59148199Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2024-02-20T23:14:46.592551449Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.070119ms grafana | logger=migrator t=2024-02-20T23:14:46.597799671Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2024-02-20T23:14:46.603977671Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.17967ms grafana | logger=migrator t=2024-02-20T23:14:46.607188206Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2024-02-20T23:14:46.611421351Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.233265ms grafana | logger=migrator t=2024-02-20T23:14:46.614431575Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2024-02-20T23:14:46.615332482Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=900.327µs grafana | logger=migrator t=2024-02-20T23:14:46.619576346Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2024-02-20T23:14:46.620686235Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.110109ms grafana | logger=migrator t=2024-02-20T23:14:46.624125883Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator t=2024-02-20T23:14:46.625458923Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.332691ms grafana | logger=migrator t=2024-02-20T23:14:46.628706189Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2024-02-20T23:14:46.630259882Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.552782ms grafana | logger=migrator t=2024-02-20T23:14:46.634641937Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2024-02-20T23:14:46.634753408Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=111.981µs grafana | logger=migrator t=2024-02-20T23:14:46.638010934Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2024-02-20T23:14:46.646296521Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.283207ms grafana | logger=migrator t=2024-02-20T23:14:46.6500163Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2024-02-20T23:14:46.658344997Z level=info msg="Migration successfully executed" 
id="add column annotations to alert_rule" duration=8.328177ms grafana | logger=migrator t=2024-02-20T23:14:46.662277779Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2024-02-20T23:14:46.667707162Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.429753ms grafana | logger=migrator t=2024-02-20T23:14:46.671794715Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2024-02-20T23:14:46.672554121Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=758.926µs grafana | logger=migrator t=2024-02-20T23:14:46.676780005Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator t=2024-02-20T23:14:46.678472089Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.692443ms grafana | logger=migrator t=2024-02-20T23:14:46.682730993Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2024-02-20T23:14:46.690318604Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=7.585761ms grafana | logger=migrator t=2024-02-20T23:14:46.693209137Z level=info msg="Executing migration" id="add panel_id column to alert_rule" grafana | logger=migrator t=2024-02-20T23:14:46.699151515Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.939848ms grafana | logger=migrator t=2024-02-20T23:14:46.703381159Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" grafana | logger=migrator t=2024-02-20T23:14:46.704334566Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=953.267µs kafka | [2024-02-20 23:15:18,106] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) kafka | [2024-02-20 23:15:18,107] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,107] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,107] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,108] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,108] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,108] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,108] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,108] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,108] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,108] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,108] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,108] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,109] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,109] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,109] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,109] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,109] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,109] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,109] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,109] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,109] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,109] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,110] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,110] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,110] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,110] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,110] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,110] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,110] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,110] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,110] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,110] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,111] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,111] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,111] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,111] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,111] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,111] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,111] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,111] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,111] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,111] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:46.70733292Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2024-02-20T23:14:46.713127587Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.794387ms grafana | logger=migrator t=2024-02-20T23:14:46.71607392Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2024-02-20T23:14:46.721975308Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.900858ms grafana | logger=migrator t=2024-02-20T23:14:46.726688926Z level=info 
msg="Executing migration" id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2024-02-20T23:14:46.726752386Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=64.19µs grafana | logger=migrator t=2024-02-20T23:14:46.729610419Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2024-02-20T23:14:46.730553967Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=943.528µs grafana | logger=migrator t=2024-02-20T23:14:46.733565021Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2024-02-20T23:14:46.735290655Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.724914ms grafana | logger=migrator t=2024-02-20T23:14:46.740058913Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2024-02-20T23:14:46.742027849Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.967196ms grafana | logger=migrator t=2024-02-20T23:14:46.745497427Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2024-02-20T23:14:46.745562707Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=67.68µs grafana | logger=migrator t=2024-02-20T23:14:46.751228783Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2024-02-20T23:14:46.757390122Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.16765ms grafana | logger=migrator t=2024-02-20T23:14:46.761696147Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2024-02-20T23:14:46.766013671Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.319865ms grafana | logger=migrator t=2024-02-20T23:14:46.768716033Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2024-02-20T23:14:46.774908303Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.19018ms grafana | logger=migrator t=2024-02-20T23:14:46.777652245Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2024-02-20T23:14:46.784121857Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.469142ms grafana | logger=migrator t=2024-02-20T23:14:46.793370881Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2024-02-20T23:14:46.802903808Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=9.533537ms grafana | logger=migrator t=2024-02-20T23:14:46.812167492Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2024-02-20T23:14:46.812215082Z 
level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=46.1µs grafana | logger=migrator t=2024-02-20T23:14:46.818482533Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2024-02-20T23:14:46.819603942Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.1208ms grafana | logger=migrator t=2024-02-20T23:14:46.824618322Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2024-02-20T23:14:46.833866246Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=9.248024ms grafana | logger=migrator t=2024-02-20T23:14:46.888462404Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2024-02-20T23:14:46.888525205Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=62.931µs grafana | logger=migrator t=2024-02-20T23:14:46.892064353Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2024-02-20T23:14:46.898140812Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.075869ms grafana | logger=migrator t=2024-02-20T23:14:46.901372478Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2024-02-20T23:14:46.902407026Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.033588ms grafana | logger=migrator t=2024-02-20T23:14:46.907187085Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2024-02-20T23:14:46.913304214Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.117869ms grafana | logger=migrator t=2024-02-20T23:14:46.917567418Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2024-02-20T23:14:46.918923159Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.358631ms grafana | logger=migrator t=2024-02-20T23:14:46.923137863Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2024-02-20T23:14:46.924617314Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.479811ms grafana | logger=migrator t=2024-02-20T23:14:46.930765484Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | logger=migrator t=2024-02-20T23:14:46.942902771Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=12.132187ms grafana | logger=migrator t=2024-02-20T23:14:46.945821735Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2024-02-20T23:14:46.946380549Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=556.684µs grafana | logger=migrator t=2024-02-20T23:14:46.949076581Z level=info msg="Executing migration" id="add index to uniquify (record_key, 
record_type, org_id) columns" grafana | logger=migrator t=2024-02-20T23:14:46.949832237Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=757.356µs grafana | logger=migrator t=2024-02-20T23:14:46.952508359Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2024-02-20T23:14:46.953279115Z level=info msg="Migration successfully executed" id="create alert_image table" duration=770.426µs grafana | logger=migrator t=2024-02-20T23:14:46.957355437Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2024-02-20T23:14:46.958453326Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.097879ms grafana | logger=migrator t=2024-02-20T23:14:46.961117898Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2024-02-20T23:14:46.961188468Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=71.31µs kafka | [2024-02-20 23:15:18,112] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,112] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,112] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,112] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,112] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,112] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,112] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,112] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,112] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-02-20 23:15:18,113] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-02-20 23:15:18,127] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2024-02-20 23:15:18,128] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-02-20 23:15:18,128] TRACE [Broker id=1] 
Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-02-20 23:15:18,128] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-02-20 23:15:18,128] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-02-20 23:15:18,128] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-02-20 23:15:18,128] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-02-20 23:15:18,128] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-02-20 23:15:18,128] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-02-20 23:15:18,128] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-02-20 23:15:18,128] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-02-20 23:15:18,128] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-02-20 23:15:18,128] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-02-20 23:15:18,128] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-02-20 23:15:18,129] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-02-20 23:15:18,129] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-02-20 23:15:18,129] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscaproperty policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT policy-db-migrator | -------------- policy-db-migrator | 
policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | -------------- policy-db-migrator | select 'upgrade to 1100 completed' as msg policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | msg policy-db-migrator | upgrade to 1100 completed policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | -------------- policy-pap | [2024-02-20T23:15:15.598+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2024-02-20T23:15:15.733+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2024-02-20T23:15:16.002+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@451a4187, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@565c887e, org.springframework.security.web.context.SecurityContextHolderFilter@8636cf4, org.springframework.security.web.header.HeaderWriterFilter@33a8f553, org.springframework.security.web.authentication.logout.LogoutFilter@cfc4601, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3361d286, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@4f65af91, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@8ee1404, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@5c215642, org.springframework.security.web.access.ExceptionTranslationFilter@72240290, org.springframework.security.web.access.intercept.AuthorizationFilter@426913c4] policy-pap | [2024-02-20T23:15:17.009+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-pap | [2024-02-20T23:15:17.153+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2024-02-20T23:15:17.174+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-pap | [2024-02-20T23:15:17.193+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2024-02-20T23:15:17.193+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2024-02-20T23:15:17.194+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2024-02-20T23:15:17.196+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2024-02-20T23:15:17.196+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-pap | [2024-02-20T23:15:17.196+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2024-02-20T23:15:17.196+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2024-02-20T23:15:17.201+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource 
[getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=364f6f57-838f-467b-8ccc-3ae2767c47b5, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4440750 policy-pap | [2024-02-20T23:15:17.213+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=364f6f57-838f-467b-8ccc-3ae2767c47b5, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-02-20T23:15:17.214+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = 364f6f57-838f-467b-8ccc-3ae2767c47b5 policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME grafana | logger=migrator t=2024-02-20T23:14:46.964025261Z level=info msg="Executing migration" id=create_alert_configuration_history_table grafana | logger=migrator t=2024-02-20T23:14:46.964989929Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=964.358µs policy-pap | max.poll.records = 500 policy-db-migrator | -------------- kafka | [2024-02-20 23:15:18,130] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:46.970040599Z level=info 
msg="Executing migration" id="drop non-unique orgID index on alert_configuration" policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | kafka | [2024-02-20 23:15:18,130] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:46.971559511Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.521512ms policy-pap | metric.reporters = [] policy-db-migrator | grafana | logger=migrator t=2024-02-20T23:14:46.974640746Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" policy-pap | metrics.num.samples = 2 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql kafka | [2024-02-20 23:15:18,130] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:46.975256931Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" policy-pap | metrics.recording.level = INFO policy-db-migrator | -------------- kafka | [2024-02-20 23:15:18,130] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:46.980690785Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics kafka | [2024-02-20 23:15:18,130] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:46.981188099Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=497.614µs policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-db-migrator | -------------- kafka | [2024-02-20 23:15:18,130] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator 
t=2024-02-20T23:14:46.986647093Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" policy-pap | receive.buffer.bytes = 65536 policy-db-migrator | kafka | [2024-02-20 23:15:18,130] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:46.990341522Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=3.693909ms policy-pap | reconnect.backoff.max.ms = 1000 policy-db-migrator | -------------- kafka | [2024-02-20 23:15:18,130] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.024216468Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" policy-pap | reconnect.backoff.ms = 50 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) kafka | [2024-02-20 23:15:18,130] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.034299804Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=10.083386ms policy-pap | request.timeout.ms = 30000 policy-db-migrator | -------------- kafka | [2024-02-20 23:15:18,130] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.037889435Z level=info msg="Executing migration" id="create library_element table v1" policy-pap | retry.backoff.ms = 100 policy-db-migrator | grafana | logger=migrator t=2024-02-20T23:14:47.038624682Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=735.457µs policy-pap | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-02-20T23:14:47.045247978Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" policy-db-migrator | policy-pap | sasl.jaas.config = null policy-db-migrator | > upgrade 0120-audit_sequence.sql kafka | [2024-02-20 23:15:18,130] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation 
id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.046669351Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.421103ms policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:47.049875579Z level=info msg="Executing migration" id="create library_element_connection table v1" policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) grafana | logger=migrator t=2024-02-20T23:14:47.051094699Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.21891ms policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | -------------- policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | grafana | logger=migrator t=2024-02-20T23:14:47.054220296Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" kafka | [2024-02-20 23:15:18,131] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:47.055458937Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.239421ms kafka | [2024-02-20 23:15:18,131] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.login.class = null policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) grafana | logger=migrator t=2024-02-20T23:14:47.059683483Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" kafka | [2024-02-20 23:15:18,131] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:47.060805873Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.12215ms policy-pap | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-02-20T23:14:47.064473515Z level=info msg="Executing migration" id="increase max description length to 2048" policy-db-migrator | grafana | logger=migrator 
t=2024-02-20T23:14:47.064527655Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=55.39µs policy-db-migrator | grafana | logger=migrator t=2024-02-20T23:14:47.067845844Z level=info msg="Executing migration" id="alter library_element model to mediumtext" policy-pap | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-02-20T23:14:47.067966225Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=117.011µs policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-02-20T23:14:47.07316522Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-02-20 23:15:18,131] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.073789325Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=626.115µs policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-02-20 23:15:18,131] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.077636918Z level=info msg="Executing migration" id="create data_keys table" policy-db-migrator | -------------- policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-02-20 23:15:18,131] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.079117071Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.479823ms policy-db-migrator | policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:47.082995715Z level=info msg="Executing migration" id="create secrets table" policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) grafana | logger=migrator t=2024-02-20T23:14:47.083921483Z level=info msg="Migration successfully executed" id="create secrets table" duration=929.428µs kafka | [2024-02-20 23:15:18,131] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:47.08704946Z level=info msg="Executing migration" id="rename data_keys name column to id" policy-db-migrator | grafana | logger=migrator t=2024-02-20T23:14:47.133781374Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=46.727434ms policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:47.140612664Z level=info msg="Executing migration" id="add name column into data_keys" policy-db-migrator | TRUNCATE TABLE sequence grafana | logger=migrator t=2024-02-20T23:14:47.150387698Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=9.771434ms policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:47.153282433Z level=info msg="Executing migration" id="copy data_keys id column values into name" policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-02-20 23:15:18,131] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-20T23:14:47.158309977Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=5.023954ms policy-pap | sasl.mechanism = GSSAPI kafka | [2024-02-20 23:15:18,131] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-20T23:14:47.16098451Z level=info msg="Executing migration" id="rename data_keys name column to label" policy-db-migrator | > upgrade 0100-pdpstatistics.sql grafana | logger=migrator t=2024-02-20T23:14:47.20493452Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=43.94341ms policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-02-20 23:15:18,131] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:47.209866433Z level=info msg="Executing migration" id="rename data_keys id column back to name" policy-pap | sasl.oauthbearer.expected.audience = null kafka | [2024-02-20 23:15:18,131] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 
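The policy-db-migrator statements interleaved above (0120-audit_sequence.sql) create a sequence table only if it is missing and seed its SEQ_GEN row from the largest id already present in jpapolicyaudit, falling back to 0 via IFNULL when that table is empty; 0130-statistics_sequence.sql applies the same pattern to pdpstatistics. Below is a minimal stand-alone JDBC sketch of that pattern, not the migrator itself: the connection URL, user and password are made-up placeholders and a MariaDB JDBC driver on the classpath is assumed, while the SQL text is copied from the log.

// Minimal sketch (assumptions: placeholder JDBC URL/credentials, MariaDB driver available).
// It replays the two statements from the 0120-audit_sequence.sql step shown in the log.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AuditSequenceSeedSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mariadb://mariadb:3306/policyadmin"; // placeholder endpoint, not from the log
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_password");
             Statement stmt = conn.createStatement()) {
            // Create the sequence table only when it does not exist yet.
            stmt.executeUpdate("CREATE TABLE IF NOT EXISTS audit_sequence "
                    + "(SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, "
                    + "PRIMARY KEY PK_SEQUENCE (SEQ_NAME))");
            // Seed SEQ_GEN with the current highest audit id (or 0 when jpapolicyaudit is empty),
            // so ids generated after the migration continue from where the existing data stops.
            stmt.executeUpdate("INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) "
                    + "VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))");
        }
    }
}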
policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics grafana | logger=migrator t=2024-02-20T23:14:47.256262975Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=46.393092ms policy-pap | sasl.oauthbearer.expected.issuer = null kafka | [2024-02-20 23:15:18,131] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:47.269981214Z level=info msg="Executing migration" id="create kv_store table v1" policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-02-20 23:15:18,131] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-20T23:14:47.27076156Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=783.417µs policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-02-20 23:15:18,132] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:47.275275Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2024-02-20T23:14:47.276077327Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=802.337µs policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | DROP TABLE pdpstatistics kafka | [2024-02-20 23:15:18,132] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.278987342Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:47.279186083Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=198.591µs policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | policy-pap | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-02-20T23:14:47.282175939Z level=info msg="Executing migration" id="create permission table" policy-db-migrator | grafana | logger=migrator t=2024-02-20T23:14:47.282800185Z level=info msg="Migration successfully executed" id="create 
permission table" duration=626.156µs policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-02-20 23:15:18,132] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:47.287450995Z level=info msg="Executing migration" id="add unique index permission.role_id" policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats grafana | logger=migrator t=2024-02-20T23:14:47.289055369Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.604344ms policy-pap | security.protocol = PLAINTEXT kafka | [2024-02-20 23:15:18,132] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:47.292370057Z level=info msg="Executing migration" id="add unique index role_id_action_scope" policy-pap | security.providers = null kafka | [2024-02-20 23:15:18,132] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-20T23:14:47.293456497Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.088ms policy-pap | send.buffer.bytes = 131072 kafka | [2024-02-20 23:15:18,132] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-20T23:14:47.296419283Z level=info msg="Executing migration" id="create role table" policy-pap | session.timeout.ms = 45000 kafka | [2024-02-20 23:15:18,132] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0120-statistics_sequence.sql grafana | logger=migrator t=2024-02-20T23:14:47.297734794Z level=info msg="Migration successfully executed" id="create role table" duration=1.315421ms grafana | logger=migrator t=2024-02-20T23:14:47.30309144Z level=info msg="Executing migration" id="add column display_name" 
policy-db-migrator | -------------- policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-02-20T23:14:47.310496275Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.404185ms policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | DROP TABLE statistics_sequence grafana | logger=migrator t=2024-02-20T23:14:47.313972794Z level=info msg="Executing migration" id="add column group_name" policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-20T23:14:47.321054556Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.082802ms policy-pap | ssl.cipher.suites = null kafka | [2024-02-20 23:15:18,132] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-20T23:14:47.324181343Z level=info msg="Executing migration" id="add index role.org_id" policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-02-20 23:15:18,132] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policyadmin: OK: upgrade (1300) grafana | logger=migrator t=2024-02-20T23:14:47.324915079Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=734.286µs policy-pap | ssl.endpoint.identification.algorithm = https kafka | [2024-02-20 23:15:18,132] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | name version grafana | logger=migrator t=2024-02-20T23:14:47.330341726Z level=info msg="Executing migration" id="add unique index role_org_id_name" policy-pap | ssl.engine.factory.class = null kafka | [2024-02-20 23:15:18,133] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | policyadmin 1300 grafana | logger=migrator t=2024-02-20T23:14:47.331371225Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.029579ms policy-db-migrator | ID script operation from_version to_version tag success atTime policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-02-20T23:14:47.335691452Z level=info msg="Executing migration" id="add index role_org_id_uid" policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:48 grafana | logger=migrator 
t=2024-02-20T23:14:47.336752912Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.0635ms policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:48 grafana | logger=migrator t=2024-02-20T23:14:47.339681657Z level=info msg="Executing migration" id="create team role table" kafka | [2024-02-20 23:15:18,169] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:48 grafana | logger=migrator t=2024-02-20T23:14:47.340440564Z level=info msg="Migration successfully executed" id="create team role table" duration=758.667µs kafka | [2024-02-20 23:15:18,169] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) policy-pap | ssl.keystore.key = null policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:48 grafana | logger=migrator t=2024-02-20T23:14:47.346461026Z level=info msg="Executing migration" id="add index team_role.org_id" kafka | [2024-02-20 23:15:18,169] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) policy-pap | ssl.keystore.location = null policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:48 grafana | logger=migrator t=2024-02-20T23:14:47.348198411Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.736985ms kafka | [2024-02-20 23:15:18,169] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) policy-pap | ssl.keystore.password = null policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:48 grafana | logger=migrator t=2024-02-20T23:14:47.351329898Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-02-20T23:14:47.353201964Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.871796ms kafka | [2024-02-20 23:15:18,169] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:48 policy-pap | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-02-20T23:14:47.356282381Z level=info msg="Executing migration" id="add index team_role.team_id" policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:48 policy-pap | ssl.provider = null grafana | logger=migrator t=2024-02-20T23:14:47.357453531Z level=info msg="Migration successfully executed" id="add 
index team_role.team_id" duration=1.17123ms policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:48 policy-pap | ssl.secure.random.implementation = null policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:48 grafana | logger=migrator t=2024-02-20T23:14:47.364094308Z level=info msg="Executing migration" id="create user role table" policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:48 policy-pap | ssl.truststore.certificates = null policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:48 grafana | logger=migrator t=2024-02-20T23:14:47.364625233Z level=info msg="Migration successfully executed" id="create user role table" duration=531.265µs policy-pap | ssl.truststore.location = null policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:48 policy-pap | ssl.truststore.password = null grafana | logger=migrator t=2024-02-20T23:14:47.367216795Z level=info msg="Executing migration" id="add index user_role.org_id" policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:48 grafana | logger=migrator t=2024-02-20T23:14:47.367952352Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=735.487µs policy-pap | ssl.truststore.type = JKS policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:48 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-02-20T23:14:47.370938198Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.372616792Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.677704ms policy-pap | grafana | logger=migrator t=2024-02-20T23:14:47.377243522Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2024-02-20T23:14:47.382167295Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=4.926283ms kafka | [2024-02-20 23:15:18,169] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) policy-pap | [2024-02-20T23:15:17.220+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.386555723Z level=info msg="Executing migration" id="create builtin role table" policy-pap | [2024-02-20T23:15:17.220+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-02-20T23:14:47.38733222Z level=info msg="Migration successfully executed" id="create builtin role table" duration=776.807µs policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator 
t=2024-02-20T23:14:47.391254404Z level=info msg="Executing migration" id="add index builtin_role.role_id" policy-pap | [2024-02-20T23:15:17.220+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708470917220 grafana | logger=migrator t=2024-02-20T23:14:47.397020563Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=5.764209ms policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 policy-pap | [2024-02-20T23:15:17.220+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.402441Z level=info msg="Executing migration" id="add index builtin_role.name" kafka | [2024-02-20 23:15:18,169] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) policy-pap | [2024-02-20T23:15:17.221+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.40352042Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.0792ms kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) policy-pap | [2024-02-20T23:15:17.221+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=af9ac749-62b3-4dcb-877d-634ce0823203, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1a45e29f policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.406440875Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) policy-pap | [2024-02-20T23:15:17.221+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=af9ac749-62b3-4dcb-877d-634ce0823203, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase 
[servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.413889689Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.448374ms kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) policy-pap | [2024-02-20T23:15:17.221+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.417682892Z level=info msg="Executing migration" id="add index builtin_role.org_id" kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) policy-pap | allow.auto.create.topics = true policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.419949321Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=2.266569ms policy-pap | auto.commit.interval.ms = 5000 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.425868142Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" policy-pap | auto.include.jmx.reporter = true kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.428329914Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=2.461542ms policy-pap | auto.offset.reset = latest kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.431703333Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" policy-pap | bootstrap.servers = [kafka:9092] kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 
grafana | logger=migrator t=2024-02-20T23:14:47.434123704Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=2.422141ms policy-pap | check.crcs = true kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.43941502Z level=info msg="Executing migration" id="add unique index role.uid" policy-pap | client.dns.lookup = use_all_dns_ips kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.44064823Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.2332ms policy-pap | client.id = consumer-policy-pap-4 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.446678133Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2024-02-20T23:14:47.448236856Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.558384ms policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 policy-pap | client.rack = grafana | logger=migrator t=2024-02-20T23:14:47.455133926Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 policy-pap | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-02-20T23:14:47.456399947Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.264741ms policy-pap | default.api.timeout.ms = 60000 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 policy-pap | enable.auto.commit = true grafana | logger=migrator t=2024-02-20T23:14:47.463295767Z level=info msg="Executing migration" id="add column hidden to role table" policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.470550849Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.256043ms policy-pap | exclude.internal.topics = true policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 policy-pap | fetch.max.bytes = 52428800 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.475307581Z level=info msg="Executing migration" id="permission kind migration" policy-pap | fetch.max.wait.ms = 500 
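The ConsumerConfig blocks interleaved above are the standard kafka-clients property dump printed while PAP starts its topic sources: one consumer with the generated group id 364f6f57-838f-467b-8ccc-3ae2767c47b5 subscribed to policy-pdp-pap, and one with group id policy-pap (client.id consumer-policy-pap-4) for the heartbeat source whose effective topic is also policy-pdp-pap. The following is a minimal stand-alone sketch of an equivalent consumer, not the ONAP PAP code; it uses only the plain kafka-clients API, with the broker, group id, topic, deserializers and a few other settings taken from the logged values.

// Minimal sketch (plain kafka-clients API; values mirror the ConsumerConfig dump in the log).
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapTopicSourceSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "364f6f57-838f-467b-8ccc-3ae2767c47b5");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"); // as logged: auto.offset.reset = latest
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");  // as logged: enable.auto.commit = true
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500");     // as logged: max.poll.records = 500

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            // fetchTimeout=15000 in the logged topic source; here it is simply the poll timeout.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(15000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}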
grafana | logger=migrator t=2024-02-20T23:14:47.483185709Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.877788ms policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 policy-pap | fetch.min.bytes = 1 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:49 grafana | logger=migrator t=2024-02-20T23:14:47.485874132Z level=info msg="Executing migration" id="permission attribute migration" policy-pap | group.id = policy-pap grafana | logger=migrator t=2024-02-20T23:14:47.49143243Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.557808ms policy-pap | group.instance.id = null grafana | logger=migrator t=2024-02-20T23:14:47.495677367Z level=info msg="Executing migration" id="permission identifier migration" policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 grafana | logger=migrator t=2024-02-20T23:14:47.505682964Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=10.005047ms grafana | logger=migrator t=2024-02-20T23:14:47.510606176Z level=info msg="Executing migration" id="add permission identifier index" policy-pap | heartbeat.interval.ms = 3000 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.511904877Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.298111ms policy-pap | interceptor.classes = [] policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.514910343Z level=info msg="Executing migration" id="create query_history table v1" policy-pap | internal.leave.group.on.close = true policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.515950722Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.040449ms policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.519723825Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" policy-pap | isolation.level = read_uncommitted policy-db-migrator | 46 
0550-toscacapabilitytypes.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.521137707Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.399032ms policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 grafana | logger=migrator t=2024-02-20T23:14:47.525574866Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" policy-pap | max.partition.fetch.bytes = 1048576 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 grafana | logger=migrator t=2024-02-20T23:14:47.525640826Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=60.8µs policy-pap | max.poll.interval.ms = 300000 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 grafana | logger=migrator t=2024-02-20T23:14:47.528280509Z level=info msg="Executing migration" id="rbac disabled migrator" policy-pap | max.poll.records = 500 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.528327509Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=47.93µs policy-pap | metadata.max.age.ms = 300000 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.535087318Z level=info msg="Executing migration" id="teams permissions migration" policy-pap | metric.reporters = [] policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.535890775Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=807.677µs policy-pap | metrics.num.samples = 2 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.540619226Z level=info msg="Executing migration" id="dashboard 
permissions" policy-pap | metrics.recording.level = INFO policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 grafana | logger=migrator t=2024-02-20T23:14:47.541161891Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=543.275µs policy-pap | metrics.sample.window.ms = 30000 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 grafana | logger=migrator t=2024-02-20T23:14:47.544032955Z level=info msg="Executing migration" id="dashboard permissions uid scopes" policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.544674821Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=641.976µs policy-pap | receive.buffer.bytes = 65536 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.547560206Z level=info msg="Executing migration" id="drop managed folder create actions" policy-pap | reconnect.backoff.max.ms = 1000 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.547746978Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=187.182µs policy-pap | reconnect.backoff.ms = 50 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 grafana | logger=migrator t=2024-02-20T23:14:47.549795106Z level=info msg="Executing migration" id="alerting notification permissions" policy-pap | request.timeout.ms = 30000 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 policy-pap | retry.backoff.ms = 100 grafana | logger=migrator t=2024-02-20T23:14:47.550153699Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=358.453µs kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 policy-pap | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-02-20T23:14:47.554567797Z level=info msg="Executing migration" id="create query_history_star table v1" policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 
23:14:50 grafana | logger=migrator t=2024-02-20T23:14:47.555387074Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=819.047µs policy-pap | sasl.jaas.config = null kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 grafana | logger=migrator t=2024-02-20T23:14:47.558916224Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 grafana | logger=migrator t=2024-02-20T23:14:47.559976643Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.062109ms policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:50 grafana | logger=migrator t=2024-02-20T23:14:47.562690887Z level=info msg="Executing migration" id="add column org_id in query_history_star" policy-pap | sasl.kerberos.service.name = null policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-02-20T23:14:47.570980269Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.287022ms grafana | logger=migrator t=2024-02-20T23:14:47.576061603Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.576125693Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=64.65µs policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.578223602Z level=info msg="Executing migration" id="create correlation table v1" policy-pap | sasl.login.class = null policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 
2002242314480800u 1 2024-02-20 23:14:51 grafana | logger=migrator t=2024-02-20T23:14:47.579060519Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=836.817µs policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 grafana | logger=migrator t=2024-02-20T23:14:47.581968044Z level=info msg="Executing migration" id="add index correlations.uid" policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 grafana | logger=migrator t=2024-02-20T23:14:47.583100124Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.13147ms policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.589084296Z level=info msg="Executing migration" id="add index correlations.source_uid" policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.590193145Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.108809ms policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.592735177Z level=info msg="Executing migration" id="add correlation config column" policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.603395909Z level=info msg="Migration successfully executed" id="add correlation config column" duration=10.662072ms policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) grafana | logger=migrator 
t=2024-02-20T23:14:47.60581494Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.606549177Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=733.827µs policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 grafana | logger=migrator t=2024-02-20T23:14:47.612143375Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-02-20T23:14:47.613101254Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=956.679µs policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-02-20T23:14:47.615972138Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-02-20T23:14:47.64737117Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=31.400232ms policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-02-20T23:14:47.650040813Z level=info msg="Executing migration" id="create correlation v2" policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-02-20T23:14:47.650706449Z level=info msg="Migration successfully executed" 
id="create correlation v2" duration=665.636µs policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-02-20T23:14:47.655332679Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.65660349Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.271461ms policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:51 policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-02-20T23:14:47.659646807Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" kafka | [2024-02-20 23:15:18,170] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:52 policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-02-20T23:14:47.660806547Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.15937ms kafka | [2024-02-20 23:15:18,171] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:52 policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-02-20T23:14:47.665733139Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:52 policy-pap | security.providers = null grafana | logger=migrator t=2024-02-20T23:14:47.667062391Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.329272ms policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-02-20T23:14:47.669639893Z level=info msg="Executing migration" id="copy correlation v1 to v2" policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:52 policy-pap | session.timeout.ms = 45000 grafana | logger=migrator t=2024-02-20T23:14:47.669886005Z level=info 
msg="Migration successfully executed" id="copy correlation v1 to v2" duration=246.582µs policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:52 policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | [2024-02-20 23:15:18,171] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.672596789Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:52 policy-pap | ssl.cipher.suites = null kafka | [2024-02-20 23:15:18,172] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) grafana | logger=migrator t=2024-02-20T23:14:47.673422426Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=824.977µs policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:52 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-02-20 23:15:18,172] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.678099696Z level=info msg="Executing migration" id="add provisioning column" policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2002242314480800u 1 2024-02-20 23:14:52 policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-02-20T23:14:47.686211606Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.1274ms policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2002242314480900u 1 2024-02-20 23:14:52 policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-02-20T23:14:47.689076771Z level=info msg="Executing migration" id="create entity_events table" policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 
2002242314480900u 1 2024-02-20 23:14:52 policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-02-20T23:14:47.689841688Z level=info msg="Migration successfully executed" id="create entity_events table" duration=764.617µs policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2002242314480900u 1 2024-02-20 23:14:52 policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-02-20 23:15:18,220] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-20T23:14:47.692652202Z level=info msg="Executing migration" id="create dashboard public config v1" policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2002242314480900u 1 2024-02-20 23:14:52 policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-02-20 23:15:18,233] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-20T23:14:47.69357115Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=916.388µs policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2002242314480900u 1 2024-02-20 23:14:52 policy-pap | ssl.keystore.key = null kafka | [2024-02-20 23:15:18,240] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.69813604Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2002242314480900u 1 2024-02-20 23:14:52 policy-pap | ssl.keystore.location = null kafka | [2024-02-20 23:15:18,240] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.698578864Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2002242314480900u 1 2024-02-20 23:14:52 policy-pap | ssl.keystore.password = null kafka | [2024-02-20 23:15:18,247] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.701295357Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2002242314480900u 1 2024-02-20 23:14:52 policy-pap | ssl.keystore.type = JKS kafka | [2024-02-20 23:15:18,263] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-20T23:14:47.701781452Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2002242314480900u 1 2024-02-20 23:14:52 policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-02-20 23:15:18,264] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-20T23:14:47.704479565Z level=info msg="Executing migration" id="Drop old dashboard public config table" policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2002242314480900u 1 2024-02-20 23:14:52 policy-pap | ssl.provider = null kafka | [2024-02-20 23:15:18,264] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.705326542Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=847.007µs policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2002242314480900u 1 2024-02-20 23:14:52 policy-pap | ssl.secure.random.implementation = null kafka | [2024-02-20 23:15:18,264] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.709998183Z level=info msg="Executing migration" id="recreate dashboard public config v1" policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2002242314480900u 1 2024-02-20 23:14:52 policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-02-20 23:15:18,264] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.710959311Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=959.678µs policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2002242314480900u 1 2024-02-20 23:14:52 policy-pap | ssl.truststore.certificates = null kafka | [2024-02-20 23:15:18,276] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-20T23:14:47.713696095Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2002242314481000u 1 2024-02-20 23:14:53 policy-pap | ssl.truststore.location = null kafka | [2024-02-20 23:15:18,276] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-20T23:14:47.714878065Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.17996ms policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2002242314481000u 1 2024-02-20 23:14:53 policy-pap | ssl.truststore.password = null kafka | [2024-02-20 23:15:18,277] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.717669479Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2002242314481000u 1 2024-02-20 23:14:53 policy-pap | ssl.truststore.type = JKS kafka | [2024-02-20 23:15:18,277] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.718851849Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.18264ms policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2002242314481000u 1 2024-02-20 23:14:53 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-02-20 23:15:18,277] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.723339158Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2002242314481000u 1 2024-02-20 23:14:53 policy-pap | grafana | logger=migrator t=2024-02-20T23:14:47.724414927Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.076009ms policy-pap | [2024-02-20T23:15:17.226+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2002242314481000u 1 2024-02-20 23:14:53 grafana | logger=migrator t=2024-02-20T23:14:47.726643336Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" policy-pap | [2024-02-20T23:15:17.226+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-02-20T23:14:47.727715086Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.07196ms policy-pap | [2024-02-20T23:15:17.226+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708470917226 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2002242314481000u 1 2024-02-20 23:14:53 kafka | [2024-02-20 23:15:18,287] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-20T23:14:47.731161296Z level=info msg="Executing migration" id="Drop public config table" policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2002242314481000u 1 2024-02-20 23:14:53 kafka | [2024-02-20 23:15:18,293] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-20T23:15:17.226+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap grafana | logger=migrator t=2024-02-20T23:14:47.732111574Z level=info msg="Migration successfully executed" id="Drop public config table" duration=950.078µs policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2002242314481000u 1 2024-02-20 23:14:53 kafka | [2024-02-20 23:15:18,294] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:17.226+00:00|INFO|ServiceManager|main] Policy PAP starting topics grafana | logger=migrator t=2024-02-20T23:14:47.736720054Z level=info msg="Executing migration" id="Recreate dashboard public config v2" policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2002242314481100u 1 2024-02-20 23:14:53 kafka | [2024-02-20 23:15:18,294] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:17.226+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=af9ac749-62b3-4dcb-877d-634ce0823203, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, 
locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting grafana | logger=migrator t=2024-02-20T23:14:47.737760243Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.040249ms policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2002242314481200u 1 2024-02-20 23:14:53 kafka | [2024-02-20 23:15:18,294] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-02-20T23:15:17.226+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=364f6f57-838f-467b-8ccc-3ae2767c47b5, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting grafana | logger=migrator t=2024-02-20T23:14:47.740401586Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2002242314481200u 1 2024-02-20 23:14:53 kafka | [2024-02-20 23:15:18,304] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-20T23:15:17.227+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3f30bb3f-5302-486b-8569-1b9775974abc, alive=false, publisher=null]]: starting grafana | logger=migrator t=2024-02-20T23:14:47.741561626Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.15858ms policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2002242314481200u 1 2024-02-20 23:14:53 kafka | [2024-02-20 23:15:18,305] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-20T23:15:17.245+00:00|INFO|ProducerConfig|main] ProducerConfig values: grafana | logger=migrator t=2024-02-20T23:14:47.746033435Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2002242314481200u 1 2024-02-20 23:14:53 kafka | [2024-02-20 23:15:18,305] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) policy-pap | acks = -1 grafana | logger=migrator 
t=2024-02-20T23:14:47.747233945Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.19981ms policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2002242314481300u 1 2024-02-20 23:14:53 kafka | [2024-02-20 23:15:18,305] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-02-20T23:14:47.75014856Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2002242314481300u 1 2024-02-20 23:14:53 policy-pap | batch.size = 16384 kafka | [2024-02-20 23:15:18,305] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.75125719Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.10848ms policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2002242314481300u 1 2024-02-20 23:14:53 policy-pap | bootstrap.servers = [kafka:9092] kafka | [2024-02-20 23:15:18,314] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-20T23:14:47.753887823Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" policy-pap | buffer.memory = 33554432 kafka | [2024-02-20 23:15:18,317] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-20T23:14:47.78366485Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=29.776977ms policy-db-migrator | policyadmin: OK @ 1300 policy-pap | client.dns.lookup = use_all_dns_ips kafka | [2024-02-20 23:15:18,317] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.788518902Z level=info msg="Executing migration" id="add annotations_enabled column" policy-pap | client.id = producer-1 kafka | [2024-02-20 23:15:18,317] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.796075378Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.555876ms policy-pap | compression.type = none kafka | [2024-02-20 23:15:18,317] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.798869972Z level=info msg="Executing migration" id="add time_selection_enabled column" policy-pap | connections.max.idle.ms = 540000 kafka | [2024-02-20 23:15:18,329] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-20T23:14:47.805894193Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=7.025051ms policy-pap | delivery.timeout.ms = 120000 kafka | [2024-02-20 23:15:18,330] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-20T23:14:47.808809558Z level=info msg="Executing migration" id="delete orphaned public dashboards" policy-pap | enable.idempotence = true kafka | [2024-02-20 23:15:18,330] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.80907818Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=268.442µs policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-02-20T23:14:47.811789734Z level=info msg="Executing migration" id="add share column" policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-02-20T23:14:47.820216307Z level=info msg="Migration successfully executed" id="add share column" duration=8.424173ms policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 grafana | logger=migrator t=2024-02-20T23:14:47.824709466Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" policy-pap | max.in.flight.requests.per.connection = 5 grafana | logger=migrator t=2024-02-20T23:14:47.824962028Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=251.562µs policy-pap | max.request.size = 1048576 grafana | logger=migrator t=2024-02-20T23:14:47.827356449Z level=info msg="Executing migration" id="create file table" policy-pap | metadata.max.age.ms = 300000 kafka | [2024-02-20 23:15:18,330] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.828067815Z level=info msg="Migration successfully executed" id="create file table" duration=709.396µs policy-pap | metadata.max.idle.ms = 300000 kafka | [2024-02-20 23:15:18,330] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.830814989Z level=info msg="Executing migration" id="file table idx: path natural pk" policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-02-20T23:14:47.832046479Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.23122ms kafka | [2024-02-20 23:15:18,341] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-02-20T23:14:47.836639879Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-02-20T23:14:47.83788554Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.245481ms policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-02-20T23:14:47.840683794Z level=info msg="Executing migration" id="create file_meta table" policy-pap | partitioner.adaptive.partitioning.enable = true grafana | logger=migrator t=2024-02-20T23:14:47.841571432Z level=info msg="Migration successfully executed" id="create file_meta table" duration=887.128µs grafana | logger=migrator t=2024-02-20T23:14:47.844578618Z level=info msg="Executing migration" id="file table idx: path key" policy-pap | partitioner.availability.timeout.ms = 0 grafana | logger=migrator t=2024-02-20T23:14:47.846127821Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.547903ms policy-pap | partitioner.class = null grafana | logger=migrator t=2024-02-20T23:14:47.852429276Z level=info msg="Executing migration" id="set path collation in file table" policy-pap | partitioner.ignore.keys = false grafana | logger=migrator t=2024-02-20T23:14:47.852518597Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=88.441µs policy-pap | receive.buffer.bytes = 32768 grafana | logger=migrator t=2024-02-20T23:14:47.855142109Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-02-20T23:14:47.855355061Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=213.032µs policy-pap | reconnect.backoff.ms = 50 kafka | [2024-02-20 23:15:18,342] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-20T23:14:47.858316187Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2024-02-20T23:14:47.859153054Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=834.997µs policy-pap | request.timeout.ms = 30000 kafka | [2024-02-20 23:15:18,342] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.862007019Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" policy-pap | retries = 2147483647 kafka | [2024-02-20 23:15:18,342] INFO [Partition 
__consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.862354662Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=347.123µs policy-pap | retry.backoff.ms = 100 grafana | logger=migrator t=2024-02-20T23:14:47.866623329Z level=info msg="Executing migration" id="RBAC action name migrator" policy-pap | sasl.client.callback.handler.class = null kafka | [2024-02-20 23:15:18,342] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.867465916Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=842.407µs policy-pap | sasl.jaas.config = null kafka | [2024-02-20 23:15:18,354] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-20T23:14:47.87025509Z level=info msg="Executing migration" id="Add UID column to playlist" policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-02-20T23:14:47.881522958Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=11.265358ms policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-02-20 23:15:18,354] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-20T23:14:47.884761626Z level=info msg="Executing migration" id="Update uid column values in playlist" policy-pap | sasl.kerberos.service.name = null kafka | [2024-02-20 23:15:18,355] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.885033808Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=270.252µs policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-02-20T23:14:47.893663693Z level=info msg="Executing migration" id="Add index for uid in playlist" policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-02-20 23:15:18,355] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.895143886Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.486343ms policy-pap | sasl.login.callback.handler.class = null kafka | [2024-02-20 23:15:18,355] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.897520646Z level=info msg="Executing migration" id="update group index for alert rules" policy-pap | sasl.login.class = null grafana | logger=migrator t=2024-02-20T23:14:47.898021681Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=501.855µs grafana | logger=migrator t=2024-02-20T23:14:47.900387481Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-02-20 23:15:18,366] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-20T23:14:47.900725524Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=338.053µs policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-02-20 23:15:18,366] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-20T23:14:47.903113964Z level=info msg="Executing migration" id="admin only folder/dashboard permission" policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-02-20 23:15:18,366] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.903679439Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=565.175µs policy-pap | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-02-20T23:14:47.907367451Z level=info msg="Executing migration" id="add action column to seed_assignment" policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-02-20 23:15:18,367] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-20T23:14:47.916129317Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=8.760936ms policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-02-20 23:15:18,367] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-02-20T23:14:47.919050183Z level=info msg="Executing migration" id="add scope column to seed_assignment" policy-pap | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-02-20T23:14:47.92570226Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.650877ms policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-02-20 23:15:18,373] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-20T23:14:47.929906697Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" policy-pap | sasl.mechanism = GSSAPI kafka | [2024-02-20 23:15:18,373] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-20T23:14:47.931129557Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.22277ms policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-02-20T23:14:47.934558987Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-02-20T23:14:48.047450344Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=112.884697ms policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-02-20T23:14:48.051293222Z level=info msg="Executing migration" id="add unique index builtin_role_name back" policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-02-20T23:14:48.053130855Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.842053ms policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-02-20T23:14:48.056139393Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-02-20T23:14:48.057438579Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.298866ms policy-pap | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-02-20T23:14:48.062537823Z level=info msg="Executing migration" id="add primary key to seed_assigment" policy-pap | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-02-20T23:14:48.096962724Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=34.424941ms policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-02-20T23:14:48.100753861Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-02-20T23:14:48.101039035Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=275.644µs policy-pap | security.protocol = PLAINTEXT grafana 
| logger=migrator t=2024-02-20T23:14:48.104414177Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-02-20T23:14:48.104710811Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=297.484µs kafka | [2024-02-20 23:15:18,373] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-02-20T23:14:48.108835342Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" kafka | [2024-02-20 23:15:18,373] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-02-20T23:14:48.109062055Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=225.053µs policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-02-20T23:14:48.112331776Z level=info msg="Executing migration" id="create folder table" policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-02-20T23:14:48.113055215Z level=info msg="Migration successfully executed" id="create folder table" duration=723.359µs kafka | [2024-02-20 23:15:18,374] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-02-20T23:14:48.116297276Z level=info msg="Executing migration" id="Add index for parent_uid" kafka | [2024-02-20 23:15:18,380] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-02-20T23:14:48.117113696Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=816.28µs policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-02-20T23:14:48.121353749Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" kafka | [2024-02-20 23:15:18,381] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-02-20T23:14:48.12221956Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=865.501µs policy-pap | ssl.keystore.key = null grafana | logger=migrator t=2024-02-20T23:14:48.125169367Z level=info msg="Executing migration" id="Update folder title length" policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-02-20T23:14:48.125201267Z level=info msg="Migration successfully executed" id="Update folder title length" duration=32.59µs policy-pap | ssl.keystore.password = null grafana | logger=migrator t=2024-02-20T23:14:48.128221365Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-02-20T23:14:48.129800075Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.57663ms policy-pap | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-02-20T23:14:48.135174822Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" policy-pap | ssl.provider = null grafana | logger=migrator t=2024-02-20T23:14:48.136811073Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.635841ms policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-02-20T23:14:48.143436125Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" policy-pap | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-02-20T23:14:48.144534279Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.097554ms policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-02-20T23:14:48.148775392Z level=info msg="Executing migration" id="Sync dashboard and folder table" policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-02-20T23:14:48.149480621Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=705.559µs policy-pap | ssl.truststore.password = null grafana | logger=migrator t=2024-02-20T23:14:48.153733814Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" policy-pap | ssl.truststore.type = JKS grafana | logger=migrator 
t=2024-02-20T23:14:48.15417413Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=443.786µs policy-pap | transaction.timeout.ms = 60000 grafana | logger=migrator t=2024-02-20T23:14:48.157783945Z level=info msg="Executing migration" id="create anon_device table" policy-pap | transactional.id = null grafana | logger=migrator t=2024-02-20T23:14:48.159101202Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.316807ms policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-02-20T23:14:48.162533995Z level=info msg="Executing migration" id="add unique index anon_device.device_id" policy-pap | grafana | logger=migrator t=2024-02-20T23:14:48.163699419Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.165174ms policy-pap | [2024-02-20T23:15:17.257+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. grafana | logger=migrator t=2024-02-20T23:14:48.17092466Z level=info msg="Executing migration" id="add index anon_device.updated_at" policy-pap | [2024-02-20T23:15:17.276+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 grafana | logger=migrator t=2024-02-20T23:14:48.172433588Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.509398ms policy-pap | [2024-02-20T23:15:17.276+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-02-20T23:14:48.176776783Z level=info msg="Executing migration" id="create signing_key table" policy-pap | [2024-02-20T23:15:17.276+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708470917276 grafana | logger=migrator t=2024-02-20T23:14:48.177589833Z level=info msg="Migration successfully executed" id="create signing_key table" duration=812.92µs policy-pap | [2024-02-20T23:15:17.276+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3f30bb3f-5302-486b-8569-1b9775974abc, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created grafana | logger=migrator t=2024-02-20T23:14:48.180742482Z level=info msg="Executing migration" id="add unique index signing_key.key_id" policy-pap | [2024-02-20T23:15:17.276+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b4ee44d6-4cfe-4b3e-a581-7682611298af, alive=false, publisher=null]]: starting grafana | logger=migrator t=2024-02-20T23:14:48.182236591Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.492839ms policy-pap | [2024-02-20T23:15:17.277+00:00|INFO|ProducerConfig|main] ProducerConfig values: grafana | logger=migrator t=2024-02-20T23:14:48.186602106Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" policy-pap | acks = -1 grafana | logger=migrator t=2024-02-20T23:14:48.188233226Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.631ms policy-pap | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-02-20T23:14:48.191450896Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" policy-pap | batch.size = 16384 grafana | logger=migrator t=2024-02-20T23:14:48.19171308Z level=info 
msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=262.354µs policy-pap | bootstrap.servers = [kafka:9092] grafana | logger=migrator t=2024-02-20T23:14:48.194240401Z level=info msg="Executing migration" id="Add folder_uid for dashboard" policy-pap | buffer.memory = 33554432 grafana | logger=migrator t=2024-02-20T23:14:48.204041004Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.799643ms policy-pap | client.dns.lookup = use_all_dns_ips grafana | logger=migrator t=2024-02-20T23:14:48.208357238Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" policy-pap | client.id = producer-2 grafana | logger=migrator t=2024-02-20T23:14:48.208835734Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=479.056µs policy-pap | compression.type = none grafana | logger=migrator t=2024-02-20T23:14:48.212048714Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" policy-pap | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-02-20T23:14:48.212908765Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=859.521µs policy-pap | delivery.timeout.ms = 120000 grafana | logger=migrator t=2024-02-20T23:14:48.216385559Z level=info msg="Executing migration" id="create sso_setting table" policy-pap | enable.idempotence = true grafana | logger=migrator t=2024-02-20T23:14:48.217846827Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.460628ms policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-02-20T23:14:48.226335743Z level=info msg="Executing migration" id="copy kvstore migration status to each org" policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-02-20T23:14:48.227042422Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=707.049µs policy-pap | linger.ms = 0 grafana | logger=migrator t=2024-02-20T23:14:48.240748324Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" policy-pap | max.block.ms = 60000 grafana | logger=migrator t=2024-02-20T23:14:48.241221459Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=473.796µs policy-pap | max.in.flight.requests.per.connection = 5 grafana | logger=migrator t=2024-02-20T23:14:48.244842055Z level=info msg="migrations completed" performed=526 skipped=0 duration=4.188938759s policy-pap | max.request.size = 1048576 grafana | logger=sqlstore t=2024-02-20T23:14:48.256224077Z level=info msg="Created default admin" user=admin policy-pap | metadata.max.age.ms = 300000 grafana | logger=sqlstore t=2024-02-20T23:14:48.25641533Z level=info msg="Created default organization" grafana | logger=secrets t=2024-02-20T23:14:48.260769964Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 policy-pap | metadata.max.idle.ms = 300000 kafka | [2024-02-20 23:15:18,381] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) grafana | logger=plugin.store t=2024-02-20T23:14:48.276665353Z level=info msg="Loading plugins..." 
kafka | [2024-02-20 23:15:18,381] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | metric.reporters = [] grafana | logger=local.finder t=2024-02-20T23:14:48.311627771Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled policy-pap | metrics.num.samples = 2 grafana | logger=plugin.store t=2024-02-20T23:14:48.311679921Z level=info msg="Plugins loaded" count=55 duration=35.015518ms policy-pap | metrics.recording.level = INFO grafana | logger=query_data t=2024-02-20T23:14:48.314046321Z level=info msg="Query Service initialization" policy-pap | metrics.sample.window.ms = 30000 grafana | logger=live.push_http t=2024-02-20T23:14:48.317235901Z level=info msg="Live Push Gateway initialization" policy-pap | partitioner.adaptive.partitioning.enable = true grafana | logger=ngalert.migration t=2024-02-20T23:14:48.323296637Z level=info msg=Starting policy-pap | partitioner.availability.timeout.ms = 0 grafana | logger=ngalert.migration orgID=1 t=2024-02-20T23:14:48.324172358Z level=info msg="Migrating alerts for organisation" policy-pap | partitioner.class = null grafana | logger=ngalert.migration orgID=1 t=2024-02-20T23:14:48.324928247Z level=info msg="Alerts found to migrate" alerts=0 policy-pap | partitioner.ignore.keys = false grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-02-20T23:14:48.326897322Z level=info msg="Completed legacy migration" policy-pap | receive.buffer.bytes = 32768 grafana | logger=infra.usagestats.collector t=2024-02-20T23:14:48.416905133Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=provisioning.datasources t=2024-02-20T23:14:48.43021463Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz policy-pap | reconnect.backoff.ms = 50 grafana | logger=provisioning.alerting t=2024-02-20T23:14:48.446795269Z level=info msg="starting to provision alerting" policy-pap | request.timeout.ms = 30000 grafana | logger=provisioning.alerting t=2024-02-20T23:14:48.44681706Z level=info msg="finished to provision alerting" policy-pap | retries = 2147483647 grafana | logger=ngalert.state.manager t=2024-02-20T23:14:48.447172904Z level=info msg="Warming state cache for startup" policy-pap | retry.backoff.ms = 100 grafana | logger=grafanaStorageLogger t=2024-02-20T23:14:48.44766996Z level=info msg="Storage starting" policy-pap | sasl.client.callback.handler.class = null grafana | logger=http.server t=2024-02-20T23:14:48.452706504Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= policy-pap | sasl.jaas.config = null grafana | logger=grafana-apiserver t=2024-02-20T23:14:48.456624833Z level=info msg="Authentication is disabled" policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=grafana-apiserver t=2024-02-20T23:14:48.459147814Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" kafka | [2024-02-20 23:15:18,381] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=ngalert.multiorg.alertmanager t=2024-02-20T23:14:48.459485179Z level=info msg="Starting MultiOrg Alertmanager" kafka | [2024-02-20 23:15:18,389] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.kerberos.service.name = null grafana | logger=ngalert.state.manager t=2024-02-20T23:14:48.534300901Z level=info msg="State cache has been initialized" states=0 duration=87.124777ms policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=ngalert.scheduler t=2024-02-20T23:14:48.534370652Z level=info msg="Starting scheduler" tickInterval=10s policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=ticker t=2024-02-20T23:14:48.534434572Z level=info msg=starting first_tick=2024-02-20T23:14:50Z policy-pap | sasl.login.callback.handler.class = null grafana | logger=plugins.update.checker t=2024-02-20T23:14:48.555527118Z level=info msg="Update check succeeded" duration=108.453555ms policy-pap | sasl.login.class = null grafana | logger=grafana.update.checker t=2024-02-20T23:14:48.58430333Z level=info msg="Update check succeeded" duration=137.246498ms policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=sqlstore.transactions t=2024-02-20T23:14:48.636979303Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" policy-pap | sasl.login.read.timeout.ms = null grafana | logger=infra.usagestats t=2024-02-20T23:15:20.459905996Z level=info msg="Usage stats are ready to report" policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-02-20 23:15:18,389] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-02-20 23:15:18,390] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-02-20 23:15:18,390] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | [2024-02-20 23:15:18,390] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | [2024-02-20 23:15:18,397] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT kafka | [2024-02-20 23:15:18,398] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 kafka | [2024-02-20 23:15:18,398] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | [2024-02-20 23:15:18,398] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https kafka | [2024-02-20 23:15:18,398] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-02-20 23:15:18,407] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null kafka | [2024-02-20 23:15:18,407] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-02-20 23:15:18,407] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-02-20 23:15:18,407] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS kafka | [2024-02-20 23:15:18,407] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null kafka | [2024-02-20 23:15:18,414] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-02-20T23:15:17.277+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. kafka | [2024-02-20 23:15:18,415] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-20T23:15:17.281+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-02-20T23:15:17.281+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-02-20T23:15:17.281+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708470917281 kafka | [2024-02-20 23:15:18,415] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:17.281+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b4ee44d6-4cfe-4b3e-a581-7682611298af, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-02-20T23:15:17.281+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2024-02-20T23:15:17.281+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher kafka | [2024-02-20 23:15:18,415] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:17.283+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2024-02-20T23:15:17.289+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2024-02-20T23:15:17.292+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers kafka | [2024-02-20 23:15:18,416] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-02-20T23:15:17.292+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2024-02-20T23:15:17.292+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests policy-pap | [2024-02-20T23:15:17.292+00:00|INFO|TimerManager|Thread-10] timer manager state-change started kafka | [2024-02-20 23:15:18,423] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-20T23:15:17.295+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2024-02-20T23:15:17.296+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer policy-pap | [2024-02-20T23:15:17.298+00:00|INFO|ServiceManager|main] Policy PAP started policy-pap | [2024-02-20T23:15:17.299+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.719 seconds (process running for 11.403) kafka | [2024-02-20 23:15:18,424] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-20T23:15:17.764+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2024-02-20T23:15:17.765+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: On8LQcwTQAOf3IoV4hs6OA policy-pap | [2024-02-20T23:15:17.765+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: On8LQcwTQAOf3IoV4hs6OA kafka | [2024-02-20 23:15:18,424] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:17.768+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: On8LQcwTQAOf3IoV4hs6OA policy-pap | [2024-02-20T23:15:17.837+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-02-20T23:15:17.838+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Cluster ID: On8LQcwTQAOf3IoV4hs6OA kafka | [2024-02-20 23:15:18,424] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:17.885+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-02-20T23:15:17.886+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 policy-pap | [2024-02-20T23:15:17.887+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 kafka | 
[2024-02-20 23:15:18,424] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-02-20T23:15:17.989+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-pap | [2024-02-20T23:15:18.035+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-02-20 23:15:18,436] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,437] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-20T23:15:18.096+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-02-20T23:15:18.150+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-02-20T23:15:18.204+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-02-20 23:15:18,437] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:18.257+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-02-20T23:15:18.315+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-02-20 23:15:18,437] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:18.364+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-02-20T23:15:18.422+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-02-20T23:15:18.472+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-02-20 23:15:18,437] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-02-20T23:15:18.528+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-02-20T23:15:18.582+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-02-20T23:15:18.636+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-02-20T23:15:18.688+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-02-20 23:15:18,444] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-20T23:15:18.742+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-02-20T23:15:18.802+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-02-20 23:15:18,445] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-20T23:15:18.816+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2024-02-20T23:15:18.823+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] (Re-)joining group policy-pap | [2024-02-20T23:15:18.853+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Request joining group due to: need to re-join with the given 
member-id: consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3-89e762c3-47ee-455a-95ae-aac7e027bc16 kafka | [2024-02-20 23:15:18,446] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:18.854+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-pap | [2024-02-20T23:15:18.854+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] (Re-)joining group policy-pap | [2024-02-20T23:15:18.856+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) kafka | [2024-02-20 23:15:18,446] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:18.858+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-02-20T23:15:18.866+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-c9f34d80-5d5c-42a3-8eb3-232c861f5c0f policy-pap | [2024-02-20T23:15:18.866+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) kafka | [2024-02-20 23:15:18,446] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-02-20T23:15:18.866+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-02-20T23:15:21.878+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-c9f34d80-5d5c-42a3-8eb3-232c861f5c0f', protocol='range'} kafka | [2024-02-20 23:15:18,465] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-20T23:15:21.877+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Successfully joined group with generation Generation{generationId=1, memberId='consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3-89e762c3-47ee-455a-95ae-aac7e027bc16', protocol='range'} policy-pap | [2024-02-20T23:15:21.885+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Finished assignment for group at generation 1: {consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3-89e762c3-47ee-455a-95ae-aac7e027bc16=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-02-20T23:15:21.886+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-c9f34d80-5d5c-42a3-8eb3-232c861f5c0f=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-02-20T23:15:21.917+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-c9f34d80-5d5c-42a3-8eb3-232c861f5c0f', protocol='range'} kafka | [2024-02-20 23:15:18,466] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-20T23:15:21.918+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2024-02-20T23:15:21.920+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Successfully synced group in generation Generation{generationId=1, memberId='consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3-89e762c3-47ee-455a-95ae-aac7e027bc16', protocol='range'} policy-pap | [2024-02-20T23:15:21.920+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-pap | [2024-02-20T23:15:21.924+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | 
[2024-02-20T23:15:21.924+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-02-20T23:15:21.948+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-02-20T23:15:21.948+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-02-20T23:15:21.966+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3, groupId=364f6f57-838f-467b-8ccc-3ae2767c47b5] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2024-02-20T23:15:21.967+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2024-02-20T23:15:23.764+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2024-02-20T23:15:23.764+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' policy-pap | [2024-02-20T23:15:23.766+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms policy-pap | [2024-02-20T23:15:38.746+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2024-02-20T23:15:38.746+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2eeee666-4d15-4cfd-8347-ff3503ae3470","timestampMs":1708470938710,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup"} policy-pap | [2024-02-20T23:15:38.747+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2eeee666-4d15-4cfd-8347-ff3503ae3470","timestampMs":1708470938710,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup"} policy-pap | [2024-02-20T23:15:38.757+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2024-02-20T23:15:38.859+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate starting policy-pap | [2024-02-20T23:15:38.859+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate starting listener policy-pap | [2024-02-20T23:15:38.859+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate starting timer policy-pap | 
[2024-02-20T23:15:38.860+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=244fccc0-f73f-49b9-b667-3414ddacd90b, expireMs=1708470968860] policy-pap | [2024-02-20T23:15:38.863+00:00|INFO|TimerManager|Thread-9] update timer waiting 29997ms Timer [name=244fccc0-f73f-49b9-b667-3414ddacd90b, expireMs=1708470968860] policy-pap | [2024-02-20T23:15:38.863+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate starting enqueue kafka | [2024-02-20 23:15:18,466] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:38.863+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate started policy-pap | [2024-02-20T23:15:38.864+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-8fedec74-2ca5-4ce1-9cbe-641163125da1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"244fccc0-f73f-49b9-b667-3414ddacd90b","timestampMs":1708470938843,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-02-20T23:15:38.896+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-02-20 23:15:18,466] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | {"source":"pap-8fedec74-2ca5-4ce1-9cbe-641163125da1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"244fccc0-f73f-49b9-b667-3414ddacd90b","timestampMs":1708470938843,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-02-20T23:15:38.896+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE kafka | [2024-02-20 23:15:18,467] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-02-20T23:15:38.900+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-8fedec74-2ca5-4ce1-9cbe-641163125da1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"244fccc0-f73f-49b9-b667-3414ddacd90b","timestampMs":1708470938843,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-02-20T23:15:38.901+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE kafka | [2024-02-20 23:15:18,476] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-20T23:15:38.917+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"daa43d58-3a45-4c97-aacd-3f032a1af7e7","timestampMs":1708470938907,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup"} kafka | [2024-02-20 23:15:18,477] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-20T23:15:38.920+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"daa43d58-3a45-4c97-aacd-3f032a1af7e7","timestampMs":1708470938907,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup"} policy-pap | [2024-02-20T23:15:38.921+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2024-02-20T23:15:38.921+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-02-20 23:15:18,477] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,477] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"244fccc0-f73f-49b9-b667-3414ddacd90b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"69cac102-f9ed-4ecb-9ec0-f0d4e09326b1","timestampMs":1708470938907,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-02-20T23:15:38.939+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"244fccc0-f73f-49b9-b667-3414ddacd90b","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"69cac102-f9ed-4ecb-9ec0-f0d4e09326b1","timestampMs":1708470938907,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 
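Note: the exchange above shows PAP publishing a PDP_UPDATE on policy-pdp-pap and the apex PDP answering with a PDP_STATUS whose response block echoes the request id (responseTo = 244fccc0-f73f-49b9-b667-3414ddacd90b, responseStatus = SUCCESS). A minimal sketch of decoding such a payload with Gson is shown below; the PdpStatus and PdpResponse classes are simplified stand-ins for illustration, not the real policy-models types, and the JSON literal is abbreviated from the entry above.

    import com.google.gson.Gson;

    public final class PdpStatusParseSketch {
        // Simplified stand-ins for the fields visible in the PDP_STATUS entries of this log.
        static final class PdpResponse {
            String responseTo;
            String responseStatus;
            String responseMessage;
        }
        static final class PdpStatus {
            String pdpType;
            String state;
            String messageName;
            String requestId;
            PdpResponse response;
        }

        public static void main(String[] args) {
            // Abbreviated from the PDP_STATUS response logged above.
            String json = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"messageName\":\"PDP_STATUS\","
                    + "\"requestId\":\"69cac102-f9ed-4ecb-9ec0-f0d4e09326b1\","
                    + "\"response\":{\"responseTo\":\"244fccc0-f73f-49b9-b667-3414ddacd90b\","
                    + "\"responseStatus\":\"SUCCESS\",\"responseMessage\":\"Pdp update successful.\"}}";
            PdpStatus status = new Gson().fromJson(json, PdpStatus.class);
            System.out.println(status.messageName + " from " + status.pdpType + ": "
                    + status.response.responseStatus + " for " + status.response.responseTo);
        }
    }

The responseTo field lines up with the update timer PAP registered for the request, which matches the "update timer cancelled Timer [name=244fccc0-...]" entry further down once the SUCCESS response arrives.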
policy-pap | [2024-02-20T23:15:38.939+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate stopping kafka | [2024-02-20 23:15:18,477] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-02-20T23:15:38.939+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 244fccc0-f73f-49b9-b667-3414ddacd90b policy-pap | [2024-02-20T23:15:38.940+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate stopping enqueue policy-pap | [2024-02-20T23:15:38.940+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate stopping timer policy-pap | [2024-02-20T23:15:38.940+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=244fccc0-f73f-49b9-b667-3414ddacd90b, expireMs=1708470968860] policy-pap | [2024-02-20T23:15:38.940+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate stopping listener policy-pap | [2024-02-20T23:15:38.940+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate stopped kafka | [2024-02-20 23:15:18,486] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-20T23:15:38.945+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate successful policy-pap | [2024-02-20T23:15:38.945+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 start publishing next request policy-pap | [2024-02-20T23:15:38.945+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpStateChange starting kafka | [2024-02-20 23:15:18,486] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-20T23:15:38.945+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpStateChange starting listener policy-pap | [2024-02-20T23:15:38.945+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpStateChange starting timer policy-pap | [2024-02-20T23:15:38.946+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=883bd020-d766-4b04-85c5-046e0e372bb6, expireMs=1708470968946] kafka | [2024-02-20 23:15:18,486] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:38.946+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpStateChange starting enqueue policy-pap | [2024-02-20T23:15:38.946+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpStateChange started policy-pap | [2024-02-20T23:15:38.946+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer 
[name=883bd020-d766-4b04-85c5-046e0e372bb6, expireMs=1708470968946] kafka | [2024-02-20 23:15:18,486] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:38.947+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-8fedec74-2ca5-4ce1-9cbe-641163125da1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"883bd020-d766-4b04-85c5-046e0e372bb6","timestampMs":1708470938844,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-02-20 23:15:18,486] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-02-20T23:15:38.958+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-8fedec74-2ca5-4ce1-9cbe-641163125da1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"883bd020-d766-4b04-85c5-046e0e372bb6","timestampMs":1708470938844,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-02-20T23:15:38.959+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-pap | [2024-02-20T23:15:38.970+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"883bd020-d766-4b04-85c5-046e0e372bb6","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"dde314cf-626e-4fce-8471-18e13ff86a82","timestampMs":1708470938960,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-02-20 23:15:18,498] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-20T23:15:38.971+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 883bd020-d766-4b04-85c5-046e0e372bb6 policy-pap | [2024-02-20T23:15:38.978+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-8fedec74-2ca5-4ce1-9cbe-641163125da1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"883bd020-d766-4b04-85c5-046e0e372bb6","timestampMs":1708470938844,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-02-20T23:15:38.978+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-pap | [2024-02-20T23:15:38.983+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"883bd020-d766-4b04-85c5-046e0e372bb6","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"dde314cf-626e-4fce-8471-18e13ff86a82","timestampMs":1708470938960,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-02-20 23:15:18,498] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-20T23:15:38.983+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpStateChange stopping policy-pap | [2024-02-20T23:15:38.984+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpStateChange stopping enqueue policy-pap | [2024-02-20T23:15:38.984+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpStateChange stopping timer policy-pap | [2024-02-20T23:15:38.984+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=883bd020-d766-4b04-85c5-046e0e372bb6, expireMs=1708470968946] policy-pap | [2024-02-20T23:15:38.984+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpStateChange stopping listener policy-pap | [2024-02-20T23:15:38.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpStateChange stopped policy-pap | [2024-02-20T23:15:38.985+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpStateChange successful policy-pap | [2024-02-20T23:15:38.985+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 start publishing next request policy-pap | [2024-02-20T23:15:38.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate starting policy-pap | [2024-02-20T23:15:38.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate starting listener kafka | [2024-02-20 23:15:18,499] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:38.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate starting timer policy-pap | [2024-02-20T23:15:38.985+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=fc3737ad-ad98-4ffa-9216-698c7518a46d, expireMs=1708470968985] policy-pap | [2024-02-20T23:15:38.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate starting enqueue policy-pap | [2024-02-20T23:15:38.986+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-8fedec74-2ca5-4ce1-9cbe-641163125da1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"fc3737ad-ad98-4ffa-9216-698c7518a46d","timestampMs":1708470938971,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-02-20T23:15:38.987+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate started policy-pap | [2024-02-20T23:15:38.993+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | 
{"source":"pap-8fedec74-2ca5-4ce1-9cbe-641163125da1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"fc3737ad-ad98-4ffa-9216-698c7518a46d","timestampMs":1708470938971,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-02-20T23:15:38.994+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2024-02-20T23:15:39.002+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-8fedec74-2ca5-4ce1-9cbe-641163125da1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"fc3737ad-ad98-4ffa-9216-698c7518a46d","timestampMs":1708470938971,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-02-20T23:15:39.002+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE kafka | [2024-02-20 23:15:18,499] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:39.004+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"fc3737ad-ad98-4ffa-9216-698c7518a46d","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"3657ce3e-434a-4f8c-8e3a-fde3720bedeb","timestampMs":1708470938996,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-02-20T23:15:39.005+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id fc3737ad-ad98-4ffa-9216-698c7518a46d policy-pap | [2024-02-20T23:15:39.006+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"fc3737ad-ad98-4ffa-9216-698c7518a46d","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"3657ce3e-434a-4f8c-8e3a-fde3720bedeb","timestampMs":1708470938996,"name":"apex-615c03f3-364d-4564-9b35-bc11510204d0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-02-20T23:15:39.006+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate stopping kafka | [2024-02-20 23:15:18,499] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-02-20T23:15:39.006+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate stopping enqueue policy-pap | [2024-02-20T23:15:39.006+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate stopping timer kafka | [2024-02-20 23:15:18,506] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-20T23:15:39.006+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=fc3737ad-ad98-4ffa-9216-698c7518a46d, expireMs=1708470968985] policy-pap | [2024-02-20T23:15:39.006+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate stopping listener policy-pap | [2024-02-20T23:15:39.006+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate stopped kafka | [2024-02-20 23:15:18,507] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-20T23:15:39.012+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 PdpUpdate successful policy-pap | [2024-02-20T23:15:39.012+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-615c03f3-364d-4564-9b35-bc11510204d0 has no more requests policy-pap | [2024-02-20T23:15:44.403+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls kafka | [2024-02-20 23:15:18,507] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:44.410+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls policy-pap | [2024-02-20T23:15:44.800+00:00|INFO|SessionData|http-nio-6969-exec-8] unknown group testGroup policy-pap | [2024-02-20T23:15:45.378+00:00|INFO|SessionData|http-nio-6969-exec-8] create cached group testGroup kafka | [2024-02-20 23:15:18,507] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:45.379+00:00|INFO|SessionData|http-nio-6969-exec-8] creating DB group testGroup policy-pap | [2024-02-20T23:15:45.960+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup policy-pap | [2024-02-20T23:15:46.205+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 kafka | [2024-02-20 23:15:18,507] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-02-20T23:15:46.303+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 policy-pap | [2024-02-20T23:15:46.303+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup policy-pap | [2024-02-20T23:15:46.304+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup kafka | [2024-02-20 23:15:18,514] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-20T23:15:46.321+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-02-20T23:15:46Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-02-20T23:15:46Z, user=policyadmin)] policy-pap | [2024-02-20T23:15:47.084+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup policy-pap | [2024-02-20T23:15:47.086+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 kafka | [2024-02-20 23:15:18,515] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-20T23:15:47.086+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 policy-pap | [2024-02-20T23:15:47.086+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup policy-pap | [2024-02-20T23:15:47.087+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup kafka | [2024-02-20 23:15:18,515] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:47.098+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-20T23:15:47Z, user=policyadmin)] policy-pap | [2024-02-20T23:15:47.480+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group defaultGroup policy-pap | [2024-02-20T23:15:47.480+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group testGroup kafka | [2024-02-20 23:15:18,515] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-02-20T23:15:47.480+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 policy-pap | [2024-02-20T23:15:47.480+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 kafka | [2024-02-20 23:15:18,515] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-02-20T23:15:47.480+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group testGroup policy-pap | [2024-02-20T23:15:47.480+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group testGroup policy-pap | [2024-02-20T23:15:47.491+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-20T23:15:47Z, user=policyadmin)] kafka | [2024-02-20 23:15:18,521] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-02-20T23:16:08.075+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup policy-pap | [2024-02-20T23:16:08.077+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup policy-pap | [2024-02-20T23:16:08.860+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=244fccc0-f73f-49b9-b667-3414ddacd90b, expireMs=1708470968860] kafka | [2024-02-20 23:15:18,522] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-02-20T23:16:08.947+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=883bd020-d766-4b04-85c5-046e0e372bb6, expireMs=1708470968946] kafka | [2024-02-20 23:15:18,522] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,522] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,522] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,530] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,530] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,530] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,531] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,531] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
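The audit entries PAP reports sending to the database above expose their field layout directly in the log (auditId, pdpGroup, pdpType, policy, action, timestamp, user). The Java record below is an illustrative reconstruction of that shape from the toString() output only; the actual class in ONAP policy-models may differ.

import java.time.Instant;

// Illustrative reconstruction of the PolicyAudit fields printed in the log above;
// field names and types are assumptions based on the toString() output, not the ONAP class.
public record PolicyAuditEntry(
        Long auditId,        // null until persisted, as shown in the log
        String pdpGroup,     // "testGroup"
        String pdpType,      // "pdpTypeA" / "pdpTypeC"
        String policy,       // "onap.restart.tca 1.0.0"
        String action,       // DEPLOYMENT or UNDEPLOYMENT in this run
        Instant timestamp,
        String user) {       // "policyadmin"

    public static void main(String[] args) {
        PolicyAuditEntry deploy = new PolicyAuditEntry(
                null, "testGroup", "pdpTypeA", "onap.restart.tca 1.0.0",
                "DEPLOYMENT", Instant.parse("2024-02-20T23:15:46Z"), "policyadmin");
        System.out.println(deploy);
    }
}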
(state.change.logger) kafka | [2024-02-20 23:15:18,537] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,537] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,537] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,537] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,537] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,544] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,544] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,544] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,544] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,545] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,551] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,552] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,552] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,552] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,552] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-20 23:15:18,562] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,562] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,562] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,563] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,563] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,575] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,576] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,576] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,576] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,576] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,586] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,587] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,587] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,587] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,587] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-20 23:15:18,597] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,597] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,598] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,598] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,598] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,604] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,604] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,604] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,605] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,605] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(BfQFazPiQayoGUpac3B4xw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,611] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,612] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,612] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,612] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,612] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
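The policy-pdp-pap-0 partition created above backs the topic that the KAFKA-source-policy-pdp-pap threads in the PAP entries consume from. A minimal standalone consumer sketch follows; the bootstrap address and group id are placeholders, not values taken from this deployment.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("group.id", "example-pap-listener");       // hypothetical consumer group id
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));    // topic created in the broker log above
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());           // PDP update/status messages as JSON
            }
        }
    }
}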
(state.change.logger) kafka | [2024-02-20 23:15:18,621] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,622] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,622] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,622] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,622] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,630] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,631] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,631] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,631] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,631] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,641] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,643] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,643] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,644] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,644] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-20 23:15:18,651] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,653] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,653] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,653] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,653] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,661] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,662] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,662] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,662] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,662] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,671] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,671] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,672] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,672] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,672] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-20 23:15:18,677] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,678] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,678] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,678] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,678] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,685] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,686] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,686] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,686] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,686] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,697] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,698] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,698] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,698] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,698] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-20 23:15:18,706] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,706] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,706] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,707] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,707] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,717] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,718] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,718] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,718] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,718] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,727] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,727] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,727] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,727] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,727] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-20 23:15:18,733] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,734] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,734] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,734] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,734] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,742] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,745] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,745] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,745] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,745] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-02-20 23:15:18,755] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-20 23:15:18,756] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-20 23:15:18,756] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,756] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-20 23:15:18,756] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(IOZYBLsmQm6YXR5s4m9LhQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
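Every __consumer_offsets partition above is created with the same per-topic settings (cleanup.policy=compact, compression.type=producer, segment.bytes=104857600). The AdminClient sketch below applies those same settings to a throwaway topic purely as an illustration; __consumer_offsets itself is an internal topic the broker manages on its own, and the bootstrap address is a placeholder.

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class CompactTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker address
        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("example-compacted-topic", 50, (short) 1)
                    .configs(Map.of(
                            "cleanup.policy", "compact",        // settings mirrored from the broker log above
                            "compression.type", "producer",
                            "segment.bytes", "104857600"));
            admin.createTopics(List.of(topic)).all().get();     // blocks until the topic exists
        }
    }
}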
(state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-02-20 23:15:18,760] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 
1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-43 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-02-20 23:15:18,761] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-02-20 23:15:18,769] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,771] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,772] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,772] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,772] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,772] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,772] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,772] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,772] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,772] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,772] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,772] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,772] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,772] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,773] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,773] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,773] INFO [GroupCoordinator 1]: Elected as the group 
coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,773] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,773] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,773] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,773] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,773] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,773] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,773] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,773] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,773] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,773] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,773] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,773] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,773] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,773] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,773] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,774] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,774] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,774] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,774] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,774] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,774] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,774] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,774] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,774] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,774] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 
22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,775] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,775] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,776] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,776] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,776] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,776] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,776] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,776] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,776] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,776] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,776] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,776] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,776] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,776] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,776] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,776] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,777] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,779] INFO [Broker id=1] Finished LeaderAndIsr request in 653ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) kafka | [2024-02-20 23:15:18,780] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,781] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,781] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,781] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,781] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,781] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,781] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,782] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,782] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,782] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,782] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,782] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,783] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,783] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,783] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,783] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,783] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,783] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,784] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,784] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,784] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,785] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,785] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,785] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,785] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,785] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,786] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,786] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,786] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,786] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,786] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,786] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=IOZYBLsmQm6YXR5s4m9LhQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=BfQFazPiQayoGUpac3B4xw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-02-20 23:15:18,787] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,787] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,787] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,787] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,787] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,787] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,788] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,788] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,790] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 15 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,790] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,790] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,791] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,791] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,791] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,791] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,791] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,791] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,792] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,792] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-20 23:15:18,796] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,796] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,796] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,796] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,796] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,796] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,796] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata 
request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition 
__consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,797] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,798] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,799] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-20 23:15:18,802] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-02-20 23:15:18,849] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 364f6f57-838f-467b-8ccc-3ae2767c47b5 in Empty state. Created a new member id consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3-89e762c3-47ee-455a-95ae-aac7e027bc16 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,862] INFO [GroupCoordinator 1]: Preparing to rebalance group 364f6f57-838f-467b-8ccc-3ae2767c47b5 in state PreparingRebalance with old generation 0 (__consumer_offsets-16) (reason: Adding new member consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3-89e762c3-47ee-455a-95ae-aac7e027bc16 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,864] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-c9f34d80-5d5c-42a3-8eb3-232c861f5c0f and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:18,870] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-c9f34d80-5d5c-42a3-8eb3-232c861f5c0f with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:19,149] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group b20135be-18a4-4de4-8569-3ebb4824ad25 in Empty state. Created a new member id consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2-37c4ee8e-f029-4055-ab88-8e8d67026094 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:19,152] INFO [GroupCoordinator 1]: Preparing to rebalance group b20135be-18a4-4de4-8569-3ebb4824ad25 in state PreparingRebalance with old generation 0 (__consumer_offsets-47) (reason: Adding new member consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2-37c4ee8e-f029-4055-ab88-8e8d67026094 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:21,872] INFO [GroupCoordinator 1]: Stabilized group 364f6f57-838f-467b-8ccc-3ae2767c47b5 generation 1 (__consumer_offsets-16) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:21,876] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:21,902] INFO [GroupCoordinator 1]: Assignment received from leader consumer-364f6f57-838f-467b-8ccc-3ae2767c47b5-3-89e762c3-47ee-455a-95ae-aac7e027bc16 for group 364f6f57-838f-467b-8ccc-3ae2767c47b5 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:21,902] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-c9f34d80-5d5c-42a3-8eb3-232c861f5c0f for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:22,153] INFO [GroupCoordinator 1]: Stabilized group b20135be-18a4-4de4-8569-3ebb4824ad25 generation 1 (__consumer_offsets-47) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-20 23:15:22,169] INFO [GroupCoordinator 1]: Assignment received from leader consumer-b20135be-18a4-4de4-8569-3ebb4824ad25-2-37c4ee8e-f029-4055-ab88-8e8d67026094 for group b20135be-18a4-4de4-8569-3ebb4824ad25 for generation 1. The group has 1 members, 0 of which are static. 
(kafka.coordinator.group.GroupCoordinator) ++ echo 'Tearing down containers...' Tearing down containers... ++ docker-compose down -v --remove-orphans Stopping policy-apex-pdp ... Stopping policy-pap ... Stopping kafka ... Stopping grafana ... Stopping policy-api ... Stopping compose_zookeeper_1 ... Stopping prometheus ... Stopping mariadb ... Stopping simulator ... Stopping grafana ... done Stopping prometheus ... done Stopping policy-apex-pdp ... done Stopping simulator ... done Stopping policy-pap ... done Stopping mariadb ... done Stopping kafka ... done Stopping compose_zookeeper_1 ... done Stopping policy-api ... done Removing policy-apex-pdp ... Removing policy-pap ... Removing kafka ... Removing grafana ... Removing policy-api ... Removing policy-db-migrator ... Removing compose_zookeeper_1 ... Removing prometheus ... Removing mariadb ... Removing simulator ... Removing simulator ... done Removing policy-api ... done Removing policy-pap ... done Removing policy-apex-pdp ... done Removing prometheus ... done Removing grafana ... done Removing policy-db-migrator ... done Removing mariadb ... done Removing kafka ... done Removing compose_zookeeper_1 ... done Removing network compose_default ++ cd /w/workspace/policy-pap-master-project-csit-pap + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + [[ -n /tmp/tmp.pn8iy6Kpoj ]] + rsync -av /tmp/tmp.pn8iy6Kpoj/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap sending incremental file list ./ log.html output.xml report.html testplan.txt sent 910,165 bytes received 95 bytes 1,820,520.00 bytes/sec total size is 909,619 speedup is 1.00 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models + exit 0 $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2158 killed; [ssh-agent] Stopped. Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! WARNING! Could not find file: **/log.html WARNING! Could not find file: **/report.html -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. 
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5740370706368933121.sh ---> sysstat.sh [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8409639785588404601.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/policy-pap-master-project-csit-pap + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-pap-master-project-csit-pap ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-pap-master-project-csit-pap ']' + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins667165895451484321.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-G64c from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-G64c/bin to PATH INFO: Running in OpenStack, capturing instance metadata [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10881531516566857659.sh provisioning config files... copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config9219002222567253529tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15596944639526165955.sh ---> create-netrc.sh [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12213840763410364258.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-G64c from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-G64c/bin to PATH [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2263222218327225249.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11298487158154559759.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-G64c from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible. lf-activate-venv(): INFO: Adding /tmp/venv-G64c/bin to PATH INFO: No Stack... 
INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins4615423051504693562.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-G64c from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. python-openstackclient 6.5.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible. lf-activate-venv(): INFO: Adding /tmp/venv-G64c/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1584 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. INFO: archiving logs to Nexus ---> uname -a: Linux prd-ubuntu1804-docker-8c-8g-7144 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux ---> lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 8 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC-Rome Processor Stepping: 0 CPU MHz: 2800.000 BogoMIPS: 5600.00 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-7 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 8 ---> df -h: Filesystem Size Used Avail Use% Mounted on udev 16G 0 16G 0% /dev tmpfs 3.2G 708K 3.2G 1% /run /dev/vda1 155G 14G 142G 9% / tmpfs 16G 0 16G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/vda15 105M 4.4M 100M 5% /boot/efi tmpfs 3.2G 0 3.2G 0% /run/user/1001 ---> free -m: total used free shared buff/cache available Mem: 32167 862 25306 0 5998 30849 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 link/ether fa:16:3e:03:de:44 brd ff:ff:ff:ff:ff:ff inet 10.30.106.16/23 brd 10.30.107.255 scope global dynamic ens3 valid_lft 85909sec preferred_lft 85909sec inet6 fe80::f816:3eff:fe03:de44/64 scope link valid_lft forever preferred_lft forever 3: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:e1:39:f7:1a brd ff:ff:ff:ff:ff:ff inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 
4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-7144) 02/20/24 _x86_64_ (8 CPU) 23:10:23 LINUX RESTART (8 CPU) 23:11:02 tps rtps wtps bread/s bwrtn/s 23:12:01 102.22 32.79 69.43 1679.16 14101.52 23:13:01 99.70 13.83 85.87 1122.08 18709.28 23:14:01 118.86 9.13 109.73 1619.86 47204.00 23:15:01 423.78 12.40 411.38 790.73 98698.07 23:16:01 25.18 0.35 24.83 33.73 11362.67 23:17:01 8.17 0.03 8.13 0.27 7465.21 23:18:01 59.51 1.22 58.29 106.92 9682.32 Average: 119.67 9.91 109.76 762.52 29639.92 23:11:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 23:12:01 30126752 31719132 2812468 8.54 68908 1834124 1418596 4.17 850040 1670560 156948 23:13:01 29829504 31717212 3109716 9.44 85680 2097304 1391388 4.09 862076 1923496 145972 23:14:01 27069416 31666108 5869804 17.82 129584 4648292 1416496 4.17 1010984 4385468 1529036 23:15:01 24784824 30494464 8154396 24.76 153388 5674156 7578360 22.30 2324720 5241228 272 23:16:01 23452068 29279384 9487152 28.80 155256 5785412 9105716 26.79 3588792 5295872 260 23:17:01 23401852 29229972 9537368 28.95 155436 5785984 9106828 26.79 3636856 5295844 256 23:18:01 25953768 31622148 6985452 21.21 157840 5643444 1506692 4.43 1292360 5157036 44704 Average: 26374026 30818346 6565194 19.93 129442 4495531 4503439 13.25 1937975 4138501 268207 23:11:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 23:12:01 ens3 54.61 36.16 926.98 7.34 0.00 0.00 0.00 0.00 23:12:01 lo 1.56 1.56 0.17 0.17 0.00 0.00 0.00 0.00 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:13:01 ens3 45.96 34.36 722.90 6.33 0.00 0.00 0.00 0.00 23:13:01 lo 1.27 1.27 0.13 0.13 0.00 0.00 0.00 0.00 23:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:14:01 br-b8f0610e9025 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:14:01 ens3 805.93 426.56 19208.70 31.75 0.00 0.00 0.00 0.00 23:14:01 lo 9.00 9.00 0.89 0.89 0.00 0.00 0.00 0.00 23:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:15:01 veth34b6d7f 30.81 39.19 3.04 4.60 0.00 0.00 0.00 0.00 23:15:01 vethde13685 1.27 1.78 0.15 0.17 0.00 0.00 0.00 0.00 23:15:01 veth7733915 0.43 0.70 0.05 0.30 0.00 0.00 0.00 0.00 23:15:01 br-b8f0610e9025 0.65 0.53 0.05 0.29 0.00 0.00 0.00 0.00 23:16:01 veth34b6d7f 75.19 87.90 74.03 26.75 0.00 0.00 0.00 0.01 23:16:01 vethde13685 4.00 4.78 0.68 0.75 0.00 0.00 0.00 0.00 23:16:01 veth7733915 0.13 0.22 0.01 0.01 0.00 0.00 0.00 0.00 23:16:01 br-b8f0610e9025 1.78 2.05 1.75 1.69 0.00 0.00 0.00 0.00 23:17:01 veth34b6d7f 1.50 1.72 0.54 0.39 0.00 0.00 0.00 0.00 23:17:01 vethde13685 0.17 0.35 0.01 0.02 0.00 0.00 0.00 0.00 23:17:01 veth7733915 0.15 0.07 0.01 0.00 0.00 0.00 0.00 0.00 23:17:01 br-b8f0610e9025 0.78 0.77 0.10 0.07 0.00 0.00 0.00 0.00 23:18:01 ens3 1619.18 943.33 32749.30 151.53 0.00 0.00 0.00 0.00 23:18:01 lo 34.16 34.16 6.14 6.14 0.00 0.00 0.00 0.00 23:18:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Average: ens3 189.14 105.33 4579.69 14.00 0.00 0.00 0.00 0.00 Average: lo 4.33 4.33 0.83 0.83 0.00 0.00 0.00 0.00 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 ---> sar -P ALL: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-7144) 02/20/24 _x86_64_ (8 CPU) 23:10:23 LINUX RESTART (8 CPU) 23:11:02 CPU %user %nice %system %iowait %steal %idle 23:12:01 all 9.32 0.00 0.67 2.58 0.03 87.40 23:12:01 0 2.43 0.00 0.31 0.37 0.02 96.87 23:12:01 1 0.68 0.00 0.22 0.31 0.02 98.78 23:12:01 2 4.62 0.00 0.73 4.52 0.03 90.10 23:12:01 3 5.11 0.00 0.59 9.40 0.02 84.89 23:12:01 4 30.00 0.00 1.43 3.07 0.03 65.46 23:12:01 5 18.24 0.00 1.07 0.75 0.03 
79.90 23:12:01 6 9.66 0.00 0.58 1.97 0.07 87.71 23:12:01 7 3.91 0.00 0.39 0.27 0.03 95.39 23:13:01 all 8.91 0.00 0.68 2.82 0.03 87.56 23:13:01 0 4.48 0.00 0.38 0.43 0.02 94.68 23:13:01 1 2.15 0.00 0.20 0.20 0.00 97.45 23:13:01 2 5.56 0.00 0.48 0.85 0.02 93.09 23:13:01 3 21.33 0.00 1.34 12.80 0.05 64.48 23:13:01 4 4.63 0.00 0.40 0.55 0.02 94.40 23:13:01 5 15.47 0.00 1.07 5.04 0.08 78.34 23:13:01 6 1.52 0.00 0.47 2.39 0.02 95.61 23:13:01 7 16.23 0.00 1.09 0.28 0.05 82.35 23:14:01 all 9.69 0.00 3.97 7.11 0.07 79.15 23:14:01 0 10.09 0.00 3.37 1.20 0.07 85.28 23:14:01 1 12.48 0.00 4.13 0.46 0.08 82.85 23:14:01 2 10.44 0.00 4.90 22.20 0.08 62.38 23:14:01 3 9.54 0.00 4.83 3.32 0.05 82.25 23:14:01 4 8.74 0.00 4.13 0.98 0.07 86.08 23:14:01 5 6.97 0.00 3.05 5.53 0.05 84.40 23:14:01 6 8.51 0.00 3.44 9.21 0.08 78.76 23:14:01 7 10.77 0.00 3.95 14.11 0.09 71.08 23:15:01 all 15.37 0.00 3.70 8.31 0.07 72.55 23:15:01 0 8.55 0.00 2.81 4.30 0.07 84.27 23:15:01 1 16.68 0.00 3.38 1.79 0.07 78.09 23:15:01 2 16.39 0.00 4.40 20.42 0.08 58.70 23:15:01 3 16.87 0.00 4.97 24.13 0.08 53.95 23:15:01 4 21.36 0.00 3.71 1.14 0.08 73.71 23:15:01 5 11.00 0.00 3.48 0.94 0.07 84.52 23:15:01 6 20.09 0.00 4.10 11.43 0.07 64.32 23:15:01 7 12.05 0.00 2.82 2.48 0.05 82.60 23:16:01 all 20.35 0.00 1.91 0.54 0.07 77.14 23:16:01 0 26.06 0.00 2.39 0.00 0.08 71.47 23:16:01 1 15.51 0.00 1.90 0.89 0.10 81.61 23:16:01 2 24.61 0.00 2.40 0.00 0.07 72.92 23:16:01 3 22.43 0.00 1.94 3.07 0.07 72.49 23:16:01 4 14.44 0.00 1.15 0.02 0.05 84.34 23:16:01 5 23.14 0.00 1.92 0.00 0.05 74.89 23:16:01 6 20.90 0.00 1.90 0.23 0.08 76.88 23:16:01 7 15.72 0.00 1.61 0.03 0.07 82.57 23:17:01 all 1.34 0.00 0.16 0.43 0.05 98.02 23:17:01 0 0.92 0.00 0.18 0.00 0.05 98.85 23:17:01 1 3.12 0.00 0.20 0.00 0.05 96.63 23:17:01 2 1.00 0.00 0.17 0.03 0.03 98.76 23:17:01 3 1.22 0.00 0.15 3.27 0.10 95.26 23:17:01 4 1.15 0.00 0.15 0.00 0.03 98.67 23:17:01 5 1.20 0.00 0.18 0.00 0.02 98.60 23:17:01 6 1.39 0.00 0.20 0.08 0.05 98.28 23:17:01 7 0.67 0.00 0.10 0.00 0.05 99.18 23:18:01 all 4.14 0.00 0.71 0.67 0.04 94.43 23:18:01 0 17.09 0.00 1.28 0.33 0.05 81.24 23:18:01 1 2.95 0.00 0.61 0.13 0.03 96.28 23:18:01 2 1.85 0.00 0.70 0.70 0.05 96.69 23:18:01 3 1.42 0.00 0.65 3.94 0.05 93.94 23:18:01 4 1.39 0.00 0.50 0.02 0.03 98.06 23:18:01 5 2.42 0.00 0.75 0.12 0.05 96.66 23:18:01 6 2.30 0.00 0.63 0.13 0.05 96.88 23:18:01 7 3.72 0.00 0.58 0.05 0.03 95.61 Average: all 9.86 0.00 1.68 3.20 0.05 85.21 Average: 0 9.97 0.00 1.53 0.95 0.05 87.51 Average: 1 7.62 0.00 1.51 0.54 0.05 90.29 Average: 2 9.20 0.00 1.96 6.92 0.05 81.86 Average: 3 11.14 0.00 2.06 8.55 0.06 78.20 Average: 4 11.61 0.00 1.63 0.82 0.05 85.89 Average: 5 11.19 0.00 1.65 1.76 0.05 85.35 Average: 6 9.18 0.00 1.61 3.62 0.06 85.53 Average: 7 9.01 0.00 1.50 2.43 0.05 87.00