Started by upstream project "policy-docker-master-merge-java" build number 355
originally caused by:
 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137813
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-36634 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-KuEwlt6zECAf/agent.2074
SSH_AGENT_PID=2076
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_10666932279683547356.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_10666932279683547356.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 8fadfb9667186910af1b9b6c31b9bb673057f729 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8fadfb9667186910af1b9b6c31b9bb673057f729 # timeout=30
Commit message: "Add migration in integration tests"
 > git rev-list --no-walk 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7401768939701260965.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-uSWi
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-uSWi/bin to PATH
Generating Requirements File
Python 3.10.6
pip 24.0 from /tmp/venv-uSWi/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4 argcomplete==3.3.0 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.34.95 botocore==1.34.95 bs4==0.0.2 cachetools==5.3.3 certifi==2024.2.2 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7 cliff==4.6.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.6.1 docker==4.2.2 dogpile.cache==1.3.2 email_validator==2.1.1 filelock==3.14.0 future==1.0.0 gitdb==4.0.11 GitPython==3.1.43 google-auth==2.29.0 httplib2==0.22.0 identify==2.5.36 idna==3.7 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.22.0 jsonschema-specifications==2023.12.1 keystoneauth1==5.6.0 kubernetes==29.0.0 lftools==0.37.10 lxml==5.2.1 MarkupSafe==2.1.5 msgpack==1.0.8 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.2.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0 oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==3.1.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==3.0.1 oslo.config==9.4.0 oslo.context==5.5.0 oslo.i18n==6.3.0 oslo.log==5.5.1 oslo.serialization==5.4.0 oslo.utils==7.1.0 packaging==24.0 pbr==6.0.0 platformdirs==4.2.1 prettytable==3.10.0 pyasn1==0.6.0 pyasn1_modules==0.4.0 pycparser==2.22 pygerrit2==2.0.15 PyGithub==2.3.0 pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2 pyrsistent==0.20.0 python-cinderclient==9.5.0 python-dateutil==2.9.0.post0 python-heatclient==3.5.0 python-jenkins==1.8.2 python-keystoneclient==5.4.0 python-magnumclient==4.4.0 python-novaclient==18.6.0 python-openstackclient==6.6.0 python-swiftclient==4.5.0 PyYAML==6.0.1 referencing==0.35.0 requests==2.31.0 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.18.0 rsa==4.9 ruamel.yaml==0.18.6 ruamel.yaml.clib==0.2.8 s3transfer==0.10.1 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5 stevedore==5.2.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.4 tqdm==4.66.2 typing_extensions==4.11.0 tzdata==2024.1 urllib3==1.26.18 virtualenv==20.26.1 wcwidth==0.2.13 websocket-client==1.8.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0 yq==3.4.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins7684027883181992763.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins17671358011865591260.sh
+ /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
+ set +u
+ save_set
+ RUN_CSIT_SAVE_SET=ehxB
+ RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
+ '[' 1 -eq 0 ']'
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ export ROBOT_VARIABLES=
+ ROBOT_VARIABLES=
+ export PROJECT=pap
+ PROJECT=pap
+ cd /w/workspace/policy-pap-master-project-csit-pap
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ mktemp -d
++ ROBOT_VENV=/tmp/tmp.4nQgRfHSjc
++ echo ROBOT_VENV=/tmp/tmp.4nQgRfHSjc
+++ python3 --version
++ echo 'Python version is: Python 3.6.9'
Python version is: Python 3.6.9
++ python3 -m venv --clear /tmp/tmp.4nQgRfHSjc
++ source /tmp/tmp.4nQgRfHSjc/bin/activate
+++ deactivate nondestructive
+++ '[' -n '' ']'
+++ '[' -n '' ']'
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
+++ '[' -n '' ']'
+++ unset VIRTUAL_ENV
+++ '[' '!' nondestructive = nondestructive ']'
+++ VIRTUAL_ENV=/tmp/tmp.4nQgRfHSjc
+++ export VIRTUAL_ENV
+++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ PATH=/tmp/tmp.4nQgRfHSjc/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ export PATH
+++ '[' -n '' ']'
+++ '[' -z '' ']'
+++ _OLD_VIRTUAL_PS1=
+++ '[' 'x(tmp.4nQgRfHSjc) ' '!=' x ']'
+++ PS1='(tmp.4nQgRfHSjc) '
+++ export PS1
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
++ set -exu
++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
++ echo 'Installing Python Requirements'
Installing Python Requirements
++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
++ python3 -m pip -qq freeze
bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.7 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 lxml==5.2.1 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.2 python-dateutil==2.9.0.post0 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 WebTest==3.0.0 zipp==3.6.0
++ mkdir -p /tmp/tmp.4nQgRfHSjc/src/onap
++ rm -rf /tmp/tmp.4nQgRfHSjc/src/onap/testsuite
++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
++ echo 'Installing python confluent-kafka library'
Installing python confluent-kafka library
++ python3 -m pip install -qq confluent-kafka
++ echo 'Uninstall docker-py and reinstall docker.'
Uninstall docker-py and reinstall docker.
++ python3 -m pip uninstall -y -qq docker
++ python3 -m pip install -U -qq docker
++ python3 -m pip -qq freeze
bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1 deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 future==1.0.0 idna==3.7 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.2.1 MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0 ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.2 python-dateutil==2.9.0.post0 PyYAML==6.0.1 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0
++ uname
++ grep -q Linux
++ sudo apt-get -y -qq install libxml2-utils
+ load_set
+ _setopts=ehuxB
++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o nounset
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo ehuxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +e
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +u
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ source_safely /tmp/tmp.4nQgRfHSjc/bin/activate
+ '[' -z /tmp/tmp.4nQgRfHSjc/bin/activate ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /tmp/tmp.4nQgRfHSjc/bin/activate
++ deactivate nondestructive
++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ unset _OLD_VIRTUAL_PATH
++ '[' -n '' ']'
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
++ '[' -n '' ']'
++ unset VIRTUAL_ENV
++ '[' '!' nondestructive = nondestructive ']'
++ VIRTUAL_ENV=/tmp/tmp.4nQgRfHSjc
++ export VIRTUAL_ENV
++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ PATH=/tmp/tmp.4nQgRfHSjc/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ '[' -n '' ']'
++ '[' -z '' ']'
++ _OLD_VIRTUAL_PS1='(tmp.4nQgRfHSjc) '
++ '[' 'x(tmp.4nQgRfHSjc) ' '!=' x ']'
++ PS1='(tmp.4nQgRfHSjc) (tmp.4nQgRfHSjc) '
++ export PS1
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ export TEST_OPTIONS=
+ TEST_OPTIONS=
++ mktemp -d
+ WORKDIR=/tmp/tmp.LZkIHxV7A6
+ cd /tmp/tmp.LZkIHxV7A6
+ docker login -u docker -p docker nexus3.onap.org:10001
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
+ SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
+++ GERRIT_BRANCH=master
+++ echo GERRIT_BRANCH=master
GERRIT_BRANCH=master
+++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
+++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
+++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
+++ grafana=false
+++ gui=false
+++ [[ 2 -gt 0 ]]
+++ key=apex-pdp
+++ case $key in
+++ echo apex-pdp
apex-pdp
+++ component=apex-pdp
+++ shift
+++ [[ 1 -gt 0 ]]
+++ key=--grafana
+++ case $key in
+++ grafana=true
+++ shift
+++ [[ 0 -gt 0 ]]
+++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
+++ echo 'Configuring docker compose...'
Configuring docker compose...
+++ source export-ports.sh
+++ source get-versions.sh
+++ '[' -z pap ']'
+++ '[' -n apex-pdp ']'
+++ '[' apex-pdp == logs ']'
+++ '[' true = true ']'
+++ echo 'Starting apex-pdp application with Grafana'
Starting apex-pdp application with Grafana
+++ docker-compose up -d apex-pdp grafana
Creating network "compose_default" with the default driver
Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
latest: Pulling from prom/prometheus
Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1
Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
latest: Pulling from grafana/grafana
Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e
Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
10.10.2: Pulling from mariadb
Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.3-SNAPSHOT)...
3.1.3-SNAPSHOT: Pulling from onap/policy-models-simulator
Digest: sha256:f41ae0e698a7eee4268ba3d29c141e50ab86dbca0876f787d3d80e16d6bffd9e
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.3-SNAPSHOT
Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
latest: Pulling from confluentinc/cp-zookeeper
Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214
Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
Pulling kafka (confluentinc/cp-kafka:latest)...
latest: Pulling from confluentinc/cp-kafka
Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa
Status: Downloaded newer image for confluentinc/cp-kafka:latest
Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.3-SNAPSHOT)...
3.1.3-SNAPSHOT: Pulling from onap/policy-db-migrator
Digest: sha256:8a791064871fd335678bfa3970f82e9e75f070298b752b28924799c4b76ff4b1
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.3-SNAPSHOT
Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.3-SNAPSHOT)...
3.1.3-SNAPSHOT: Pulling from onap/policy-api
Digest: sha256:7fad0e07e4ad14d7b1ec6aec34f8583031a00f072037db0e6764795a9c95f7fd
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.3-SNAPSHOT
Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.3-SNAPSHOT)...
3.1.3-SNAPSHOT: Pulling from onap/policy-pap
Digest: sha256:7f3b58c4f9b75937b65a0c67c12bb88aa2c134f077126cfa8a21b501b6bc004c
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.3-SNAPSHOT
Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.3-SNAPSHOT)...
3.1.3-SNAPSHOT: Pulling from onap/policy-apex-pdp
Digest: sha256:8770653266299381ba06ecf1ac20de5cc32cd747d987933c80da099704d6db0f
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.3-SNAPSHOT
Creating zookeeper ...
Creating simulator ...
Creating prometheus ...
Creating mariadb ...
Creating zookeeper ... done
Creating kafka ...
Creating kafka ... done
Creating mariadb ... done
Creating policy-db-migrator ...
Creating prometheus ... done
Creating grafana ...
Creating simulator ... done
Creating grafana ... done
Creating policy-db-migrator ... done
Creating policy-api ...
Creating policy-api ... done
Creating policy-pap ...
Creating policy-pap ... done
Creating policy-apex-pdp ...
Creating policy-apex-pdp ... done
+++ echo 'Prometheus server: http://localhost:30259'
Prometheus server: http://localhost:30259
+++ echo 'Grafana server: http://localhost:30269'
Grafana server: http://localhost:30269
+++ cd /w/workspace/policy-pap-master-project-csit-pap
++ sleep 10
++ unset http_proxy https_proxy
++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
Waiting for REST to come up on localhost port 30003...
NAMES             STATUS
policy-apex-pdp   Up 10 seconds
policy-pap        Up 11 seconds
policy-api        Up 12 seconds
grafana           Up 14 seconds
kafka             Up 18 seconds
mariadb           Up 17 seconds
prometheus        Up 16 seconds
zookeeper         Up 19 seconds
simulator         Up 15 seconds
NAMES             STATUS
policy-apex-pdp   Up 15 seconds
policy-pap        Up 16 seconds
policy-api        Up 17 seconds
grafana           Up 19 seconds
kafka             Up 23 seconds
mariadb           Up 22 seconds
prometheus        Up 21 seconds
zookeeper         Up 24 seconds
simulator         Up 20 seconds
NAMES             STATUS
policy-apex-pdp   Up 20 seconds
policy-pap        Up 21 seconds
policy-api        Up 22 seconds
grafana           Up 24 seconds
kafka             Up 28 seconds
mariadb           Up 27 seconds
prometheus        Up 26 seconds
zookeeper         Up 29 seconds
simulator         Up 25 seconds
NAMES             STATUS
policy-apex-pdp   Up 25 seconds
policy-pap        Up 26 seconds
policy-api        Up 27 seconds
grafana           Up 29 seconds
kafka             Up 33 seconds
mariadb           Up 32 seconds
prometheus        Up 31 seconds
zookeeper         Up 34 seconds
simulator         Up 30 seconds
NAMES             STATUS
policy-apex-pdp   Up 30 seconds
policy-pap        Up 31 seconds
policy-api        Up 32 seconds
grafana           Up 34 seconds
kafka             Up 39 seconds
mariadb           Up 37 seconds
prometheus        Up 36 seconds
zookeeper         Up 39 seconds
simulator         Up 35 seconds
++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 08:51:40 up 4 min, 0 users, load average: 3.50, 1.49, 0.59
Tasks: 208 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 12.6 us, 2.6 sy, 0.0 ni, 80.4 id, 4.2 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.6G         22G        1.3M        6.2G         28G
Swap:          1.0G          0B        1.0G
NAMES             STATUS
policy-apex-pdp   Up 30 seconds
policy-pap        Up 31 seconds
policy-api        Up 32 seconds
grafana           Up 34 seconds
kafka             Up 39 seconds
mariadb           Up 38 seconds
prometheus        Up 37 seconds
zookeeper         Up 40 seconds
simulator         Up 36 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME              CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O        PIDS
5c926d350c18   policy-apex-pdp   154.94%   180.3MiB / 31.41GiB   0.56%     6.96kB / 6.75kB   0B / 0B          48
d8dbc83094dd   policy-pap        103.67%   567.6MiB / 31.41GiB   1.76%     32.2kB / 60.9kB   0B / 149MB       63
73683836384d   policy-api        0.12%     463.5MiB / 31.41GiB   1.44%     989kB / 673kB     0B / 0B          54
f0b6e98a5161   grafana           0.04%     59.23MiB / 31.41GiB   0.18%     18.9kB / 3.31kB   0B / 24.9MB      16
374f113e1e9c   kafka             4.46%     356MiB / 31.41GiB     1.11%     70.2kB / 72.7kB   12.3kB / 508kB   83
1c9af3824a00   mariadb           0.02%     102.1MiB / 31.41GiB   0.32%     934kB / 1.18MB    11MB / 68.6MB    36
4467c1295dc9   prometheus        0.34%     19.76MiB / 31.41GiB   0.06%     56.3kB / 1.87kB   0B / 0B          12
8f0f1f14ae74   zookeeper         0.10%     96.93MiB / 31.41GiB   0.30%     56.6kB / 50kB     229kB / 418kB    60
d598961bd92a   simulator         0.07%     122.3MiB / 31.41GiB   0.38%     1.27kB / 0B       0B / 0B          76
+ echo
+ cd /tmp/tmp.LZkIHxV7A6
+ echo 'Reading the testplan:'
Reading the testplan:
+ echo 'pap-test.robot pap-slas.robot'
+ sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
+ egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
+ cat testplan.txt
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
++ xargs
+ SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
+ echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+ echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
==============================================================================
pap
==============================================================================
pap.Pap-Test
==============================================================================
LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
------------------------------------------------------------------------------
LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
------------------------------------------------------------------------------
LoadNodeTemplates :: Create node templates in database using speci... | PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | FAIL |
DEPLOYMENT != UNDEPLOYMENT
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
------------------------------------------------------------------------------
pap.Pap-Test | FAIL |
22 tests, 21 passed, 1 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
------------------------------------------------------------------------------
pap.Pap-Slas | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap | FAIL |
30 tests, 29 passed, 1 failed
==============================================================================
Output: /tmp/tmp.LZkIHxV7A6/output.xml
Log: /tmp/tmp.LZkIHxV7A6/log.html
Report: /tmp/tmp.LZkIHxV7A6/report.html
+ RESULT=1
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 1'
RESULT: 1
+ exit 1
+ on_exit
+ rc=1
+ [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES             STATUS
policy-apex-pdp   Up 2 minutes
policy-pap        Up 2 minutes
policy-api        Up 2 minutes
grafana           Up 2 minutes
kafka             Up 2 minutes
mariadb           Up 2 minutes
prometheus        Up 2 minutes
zookeeper         Up 2 minutes
simulator         Up 2 minutes
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 08:53:29 up 6 min, 0 users, load average: 0.65, 1.09, 0.54
Tasks: 196 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
%Cpu(s): 10.4 us, 2.0 sy, 0.0 ni, 84.3 id, 3.2 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.7G         22G        1.3M        6.2G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES             STATUS
policy-apex-pdp   Up 2 minutes
policy-pap        Up 2 minutes
policy-api        Up 2 minutes
grafana           Up 2 minutes
kafka             Up 2 minutes
mariadb           Up 2 minutes
prometheus        Up 2
minutes zookeeper Up 2 minutes simulator Up 2 minutes + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 5c926d350c18 policy-apex-pdp 0.44% 181.5MiB / 31.41GiB 0.56% 55.5kB / 89.7kB 0B / 0B 52 d8dbc83094dd policy-pap 0.93% 530.7MiB / 31.41GiB 1.65% 2.47MB / 1.03MB 0B / 149MB 67 73683836384d policy-api 0.09% 465.4MiB / 31.41GiB 1.45% 2.45MB / 1.09MB 0B / 0B 55 f0b6e98a5161 grafana 0.06% 64.63MiB / 31.41GiB 0.20% 19.8kB / 4.34kB 0B / 24.9MB 16 374f113e1e9c kafka 1.27% 392.6MiB / 31.41GiB 1.22% 237kB / 213kB 12.3kB / 606kB 85 1c9af3824a00 mariadb 0.01% 103.3MiB / 31.41GiB 0.32% 2.02MB / 4.87MB 11MB / 68.8MB 28 4467c1295dc9 prometheus 0.00% 24.96MiB / 31.41GiB 0.08% 167kB / 10.9kB 0B / 0B 12 8f0f1f14ae74 zookeeper 0.09% 96.95MiB / 31.41GiB 0.30% 59.5kB / 51.5kB 229kB / 418kB 60 d598961bd92a simulator 0.09% 122.5MiB / 31.41GiB 0.38% 1.58kB / 0B 0B / 0B 78 + echo + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ++ echo 'Shut down started!' Shut down started! ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose ++ source export-ports.sh ++ source get-versions.sh ++ echo 'Collecting logs from docker compose containers...' Collecting logs from docker compose containers... ++ docker-compose logs ++ cat docker_compose.log Attaching to policy-apex-pdp, policy-pap, policy-api, grafana, policy-db-migrator, kafka, mariadb, prometheus, zookeeper, simulator kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... 
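The `relax_set` / `load_set` calls traced above bracket the Robot run so a failing suite cannot abort the harness before log collection and teardown: strict options are switched off, the run's status is captured in `RESULT`, and the option letters recorded in `_setopts` plus the long-form names in `SHELLOPTS` are then walked and reset. A minimal sketch of that pattern — the function names match the trace, but the bodies here are reconstructed assumptions, not the harness's actual source:

```shell
#!/bin/bash
# relax_set: loosen strict-mode options so a non-zero status from the
# test run does not kill the script before cleanup runs.
relax_set() {
    set +e            # do not exit on non-zero status
    set +o pipefail   # do not propagate failures through pipes
}

# load_set: mirror the trace above — switch off every long-form option
# listed in SHELLOPTS, then each one-letter option saved in _setopts
# (the trace shows _setopts=hxB, i.e. hashall, xtrace, braceexpand).
load_set() {
    local _setopts=hxB
    for i in $(echo "$SHELLOPTS" | tr ':' ' '); do
        set +o "$i" 2>/dev/null || true
    done
    for i in $(echo "$_setopts" | sed 's/./& /g'); do
        set "+$i" 2>/dev/null || true
    done
}

relax_set
false                 # stand-in for the robot.run invocation; rc tolerated
RESULT=$?
load_set
echo "RESULT: $RESULT"
```

With this structure the harness still reaches `docker ps`, `docker_stats`, and `stop-compose.sh` even though `RESULT=1`, and only then does `exit 1` propagate the failure to Jenkins.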
kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | [2024-05-01 08:51:05,053] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,053] INFO Client environment:host.name=374f113e1e9c (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,053] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,053] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,053] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,053] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.
5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/shar
e/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,053] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,053] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,054] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,054] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,054] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,054] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,054] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,054] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,054] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,054] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,054] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,054] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,057] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,061] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-05-01 08:51:05,066] INFO jute.maxbuffer value is 1048575 Bytes 
(org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-05-01 08:51:05,074] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-05-01 08:51:05,099] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-05-01 08:51:05,100] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-05-01 08:51:05,112] INFO Socket connection established, initiating session, client: /172.17.0.6:52340, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) mariadb | 2024-05-01 08:51:02+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-05-01 08:51:02+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-05-01 08:51:02+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-05-01 08:51:02+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-05-01 8:51:02 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-05-01 8:51:02 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-05-01 8:51:02 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. 
mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-05-01 08:51:04+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-05-01 08:51:04+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-05-01 08:51:04+00:00 [Note] [Entrypoint]: Waiting for server startup kafka | [2024-05-01 08:51:05,145] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000003d5940000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-05-01 08:51:05,276] INFO Session: 0x1000003d5940000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:05,276] INFO EventThread shut down for session: 0x1000003d5940000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... kafka | [2024-05-01 08:51:06,041] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-05-01 08:51:06,404] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-05-01 08:51:06,469] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-05-01 08:51:06,470] INFO starting (kafka.server.KafkaServer) kafka | [2024-05-01 08:51:06,470] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-05-01 08:51:06,481] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2024-05-01 08:51:06,485] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,485] INFO Client environment:host.name=374f113e1e9c (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,485] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,485] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,485] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,485] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr
/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-ser
vlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../sh
are/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,485] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) mariadb | 2024-05-01 8:51:04 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ... 
mariadb | 2024-05-01 8:51:04 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-05-01 8:51:04 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-05-01 8:51:04 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-05-01 8:51:04 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-05-01 8:51:04 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-05-01 8:51:04 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-05-01 8:51:04 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-05-01 8:51:04 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-05-01 8:51:04 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-05-01 8:51:04 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-05-01 8:51:04 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-05-01 8:51:04 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-05-01 8:51:04 0 [Note] InnoDB: log sequence number 46590; transaction id 14 mariadb | 2024-05-01 8:51:04 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-05-01 8:51:04 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-05-01 8:51:04 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-05-01 8:51:04 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-05-01 8:51:04 0 [Note] mariadbd: ready for connections. 
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-05-01 08:51:05+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2024-05-01 08:51:07+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2024-05-01 08:51:07+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | 2024-05-01 08:51:07+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | mariadb | 2024-05-01 08:51:07+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. mariadb | # mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); mariadb | # you may not use this file except in compliance with the License. mariadb | # You may obtain a copy of the License at mariadb | # mariadb | # http://www.apache.org/licenses/LICENSE-2.0 mariadb | # mariadb | # Unless required by applicable law or agreed to in writing, software mariadb | # distributed under the License is distributed on an "AS IS" BASIS, mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. mariadb | # See the License for the specific language governing permissions and mariadb | # limitations under the License. 
mariadb | mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | do mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" mariadb | done mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE 
IF NOT EXISTS policyclamp;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | grafana | logger=settings t=2024-05-01T08:51:05.69333758Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-05-01T08:51:05Z grafana | logger=settings t=2024-05-01T08:51:05.693607595Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2024-05-01T08:51:05.693619006Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2024-05-01T08:51:05.693623806Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2024-05-01T08:51:05.693628166Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2024-05-01T08:51:05.693631006Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2024-05-01T08:51:05.693633986Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2024-05-01T08:51:05.693637867Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2024-05-01T08:51:05.693641167Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2024-05-01T08:51:05.693644797Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-05-01T08:51:05.693648187Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-05-01T08:51:05.693652737Z level=info msg="Config overridden from Environment 
variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2024-05-01T08:51:05.693656118Z level=info msg=Target target=[all] grafana | logger=settings t=2024-05-01T08:51:05.693669938Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-05-01T08:51:05.693673759Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-05-01T08:51:05.693677619Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-05-01T08:51:05.693682299Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-05-01T08:51:05.69369225Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-05-01T08:51:05.69369561Z level=info msg="App mode production" grafana | logger=sqlstore t=2024-05-01T08:51:05.694013957Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-05-01T08:51:05.694034758Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-05-01T08:51:05.694744129Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-05-01T08:51:05.697069867Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-05-01T08:51:05.697866241Z level=info msg="Migration successfully executed" id="create migration_log table" duration=796.854µs grafana | logger=migrator t=2024-05-01T08:51:05.701580877Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2024-05-01T08:51:05.702135498Z level=info msg="Migration successfully executed" id="create user table" duration=554.781µs grafana | logger=migrator t=2024-05-01T08:51:05.704637686Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-05-01T08:51:05.705136424Z level=info msg="Migration successfully executed" id="add unique index user.login" 
duration=498.558µs grafana | logger=migrator t=2024-05-01T08:51:05.71047592Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-05-01T08:51:05.711157547Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=681.407µs grafana | logger=migrator t=2024-05-01T08:51:05.714064849Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-05-01T08:51:05.714675983Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=611.085µs grafana | logger=migrator t=2024-05-01T08:51:05.717472128Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-05-01T08:51:05.718068111Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=596.283µs grafana | logger=migrator t=2024-05-01T08:51:05.722811804Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-05-01T08:51:05.725031337Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.221832ms grafana | logger=migrator t=2024-05-01T08:51:05.728466068Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-05-01T08:51:05.729270842Z level=info msg="Migration successfully executed" id="create user table v2" duration=804.534µs grafana | logger=migrator t=2024-05-01T08:51:05.732100179Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2024-05-01T08:51:05.732763225Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=662.716µs grafana | logger=migrator t=2024-05-01T08:51:05.735767632Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2024-05-01T08:51:05.736422508Z level=info msg="Migration 
successfully executed" id="create index UQE_user_email - v2" duration=654.496µs grafana | logger=migrator t=2024-05-01T08:51:05.741641617Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-05-01T08:51:05.742007127Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=365.28µs grafana | logger=migrator t=2024-05-01T08:51:05.744858885Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-05-01T08:51:05.745332823Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=473.947µs grafana | logger=migrator t=2024-05-01T08:51:05.748036732Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-05-01T08:51:05.749103191Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.067359ms grafana | logger=migrator t=2024-05-01T08:51:05.753982682Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2024-05-01T08:51:05.754009183Z level=info msg="Migration successfully executed" id="Update user table charset" duration=27.301µs grafana | logger=migrator t=2024-05-01T08:51:05.756983938Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-05-01T08:51:05.758006615Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.022226ms grafana | logger=migrator t=2024-05-01T08:51:05.761066084Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-05-01T08:51:05.761455285Z level=info msg="Migration successfully executed" id="Add missing user data" duration=389.251µs grafana | logger=migrator t=2024-05-01T08:51:05.764695285Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2024-05-01T08:51:05.76640601Z 
level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.709845ms mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp mariadb | mariadb | 2024-05-01 08:51:08+00:00 [Note] [Entrypoint]: Stopping temporary server mariadb | 2024-05-01 8:51:08 0 [Note] mariadbd (initiated by: unknown): Normal shutdown mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: FTS optimize thread exiting. mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: Starting shutdown... mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: Buffer pool(s) dump completed at 240501 8:51:08 mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: Shutdown completed; log sequence number 327895; transaction id 298 mariadb | 2024-05-01 8:51:08 0 [Note] mariadbd: Shutdown complete mariadb | mariadb | 2024-05-01 08:51:08+00:00 [Note] [Entrypoint]: Temporary server stopped mariadb | mariadb | 2024-05-01 08:51:08+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. mariadb | mariadb | 2024-05-01 8:51:08 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-05-01 8:51:08 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-05-01 8:51:08 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-05-01 8:51:08 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: log sequence number 327895; transaction id 299 mariadb | 2024-05-01 8:51:08 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool mariadb | 2024-05-01 8:51:08 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-05-01 8:51:08 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. mariadb | 2024-05-01 8:51:08 0 [Note] InnoDB: Buffer pool(s) load completed at 240501 8:51:08 mariadb | 2024-05-01 8:51:08 0 [Note] Server socket created on IP: '0.0.0.0'. 
mariadb | 2024-05-01 8:51:08 0 [Note] Server socket created on IP: '::'. mariadb | 2024-05-01 8:51:08 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution mariadb | 2024-05-01 8:51:08 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) mariadb | 2024-05-01 8:51:08 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) mariadb | 2024-05-01 8:51:09 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) mariadb | 2024-05-01 8:51:09 38 [Warning] Aborted connection 38 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) grafana | logger=migrator t=2024-05-01T08:51:05.771623079Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2024-05-01T08:51:05.772336559Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=711.71µs grafana | logger=migrator t=2024-05-01T08:51:05.775303243Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2024-05-01T08:51:05.776637008Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.332304ms grafana | logger=migrator t=2024-05-01T08:51:05.779910129Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2024-05-01T08:51:05.789271447Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.362308ms grafana | logger=migrator 
t=2024-05-01T08:51:05.792497236Z level=info msg="Executing migration" id="Add uid column to user" grafana | logger=migrator t=2024-05-01T08:51:05.793294781Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=797.525µs grafana | logger=migrator t=2024-05-01T08:51:05.799541577Z level=info msg="Executing migration" id="Update uid column values for users" grafana | logger=migrator t=2024-05-01T08:51:05.799981291Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=444.254µs grafana | logger=migrator t=2024-05-01T08:51:05.803138296Z level=info msg="Executing migration" id="Add unique index user_uid" grafana | logger=migrator t=2024-05-01T08:51:05.8037475Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=609.193µs grafana | logger=migrator t=2024-05-01T08:51:05.807191821Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" grafana | logger=migrator t=2024-05-01T08:51:05.807458896Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=267.205µs grafana | logger=migrator t=2024-05-01T08:51:05.810524366Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2024-05-01T08:51:05.811276837Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=749.941µs grafana | logger=migrator t=2024-05-01T08:51:05.815917875Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2024-05-01T08:51:05.816499307Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=579.692µs grafana | logger=migrator t=2024-05-01T08:51:05.822785795Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id 
- v1-7" grafana | logger=migrator t=2024-05-01T08:51:05.823671865Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=887.45µs grafana | logger=migrator t=2024-05-01T08:51:05.826706243Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2024-05-01T08:51:05.827521618Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=815.555µs grafana | logger=migrator t=2024-05-01T08:51:05.830393817Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2024-05-01T08:51:05.831221993Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=830.196µs grafana | logger=migrator t=2024-05-01T08:51:05.836017729Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2024-05-01T08:51:05.836108354Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=92.085µs grafana | logger=migrator t=2024-05-01T08:51:05.83875398Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2024-05-01T08:51:05.839514622Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=760.672µs grafana | logger=migrator t=2024-05-01T08:51:05.84325333Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2024-05-01T08:51:05.844022593Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=769.352µs grafana | logger=migrator t=2024-05-01T08:51:05.846572114Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2024-05-01T08:51:05.847366718Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" 
duration=789.383µs grafana | logger=migrator t=2024-05-01T08:51:05.852355174Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2024-05-01T08:51:05.852861722Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=506.748µs grafana | logger=migrator t=2024-05-01T08:51:05.855775174Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2024-05-01T08:51:05.858032119Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.257105ms grafana | logger=migrator t=2024-05-01T08:51:05.860830464Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2024-05-01T08:51:05.861490251Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=659.796µs grafana | logger=migrator t=2024-05-01T08:51:05.86670164Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2024-05-01T08:51:05.867476152Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=775.543µs grafana | logger=migrator t=2024-05-01T08:51:05.872729013Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2024-05-01T08:51:05.873342627Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=613.774µs grafana | logger=migrator t=2024-05-01T08:51:05.875853277Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2024-05-01T08:51:05.876401867Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=548.521µs grafana | logger=migrator t=2024-05-01T08:51:05.881478098Z level=info msg="Executing migration" id="create index IDX_temp_user_status - 
v2" grafana | logger=migrator t=2024-05-01T08:51:05.882284063Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=806.445µs grafana | logger=migrator t=2024-05-01T08:51:05.885230377Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2024-05-01T08:51:05.885681272Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=450.965µs grafana | logger=migrator t=2024-05-01T08:51:05.888315548Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2024-05-01T08:51:05.888918491Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=602.773µs grafana | logger=migrator t=2024-05-01T08:51:05.894201814Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" kafka | [2024-05-01 08:51:06,486] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,486] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,486] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,486] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,486] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,486] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,486] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,486] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,486] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,486] INFO Client environment:os.memory.max=1024MB 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,486] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,488] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper) kafka | [2024-05-01 08:51:06,491] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-05-01 08:51:06,496] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-05-01 08:51:06,498] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-05-01 08:51:06,502] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-05-01 08:51:06,509] INFO Socket connection established, initiating session, client: /172.17.0.6:52342, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-05-01 08:51:06,613] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000003d5940001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-05-01 08:51:06,619] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2024-05-01 08:51:07,074] INFO Cluster ID = sZdrrRZqSOecyf1-XTESVg (kafka.server.KafkaServer) kafka | [2024-05-01 08:51:07,077] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2024-05-01 08:51:07,130] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 grafana | logger=migrator t=2024-05-01T08:51:05.894626387Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=424.373µs grafana | logger=migrator t=2024-05-01T08:51:05.897306806Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2024-05-01T08:51:05.898030896Z level=info msg="Migration successfully executed" id="create star table" duration=721.37µs grafana | logger=migrator t=2024-05-01T08:51:05.902089581Z level=info 
msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2024-05-01T08:51:05.902915127Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=825.486µs grafana | logger=migrator t=2024-05-01T08:51:05.906646593Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2024-05-01T08:51:05.907466849Z level=info msg="Migration successfully executed" id="create org table v1" duration=820.305µs grafana | logger=migrator t=2024-05-01T08:51:05.913124692Z level=info msg="Executing migration" id="create index UQE_org_name - v1" grafana | logger=migrator t=2024-05-01T08:51:05.913915817Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=793.815µs grafana | logger=migrator t=2024-05-01T08:51:05.921130956Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2024-05-01T08:51:05.922009335Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=878.939µs grafana | logger=migrator t=2024-05-01T08:51:05.925200232Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2024-05-01T08:51:05.926025878Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=825.726µs grafana | logger=migrator t=2024-05-01T08:51:05.928999722Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2024-05-01T08:51:05.929796956Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=792.184µs grafana | logger=migrator t=2024-05-01T08:51:05.934307707Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2024-05-01T08:51:05.935136012Z level=info msg="Migration successfully executed" id="create index 
IDX_org_user_user_id - v1" duration=827.875µs grafana | logger=migrator t=2024-05-01T08:51:05.938316899Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2024-05-01T08:51:05.938410065Z level=info msg="Migration successfully executed" id="Update org table charset" duration=93.227µs grafana | logger=migrator t=2024-05-01T08:51:05.940820307Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2024-05-01T08:51:05.940909422Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=88.865µs grafana | logger=migrator t=2024-05-01T08:51:05.944207466Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2024-05-01T08:51:05.944447109Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=239.243µs grafana | logger=migrator t=2024-05-01T08:51:05.948861653Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2024-05-01T08:51:05.94969576Z level=info msg="Migration successfully executed" id="create dashboard table" duration=833.997µs grafana | logger=migrator t=2024-05-01T08:51:05.952637493Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2024-05-01T08:51:05.95385317Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.214967ms grafana | logger=migrator t=2024-05-01T08:51:05.957172334Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2024-05-01T08:51:05.958025582Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=855.368µs grafana | logger=migrator t=2024-05-01T08:51:05.962349131Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator 
t=2024-05-01T08:51:05.962900941Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=551.41µs grafana | logger=migrator t=2024-05-01T08:51:05.965845305Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2024-05-01T08:51:05.966454509Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=605.875µs grafana | logger=migrator t=2024-05-01T08:51:05.969480487Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2024-05-01T08:51:05.970033237Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=552.93µs grafana | logger=migrator t=2024-05-01T08:51:05.973837368Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2024-05-01T08:51:05.978848716Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.010988ms grafana | logger=migrator t=2024-05-01T08:51:05.982339559Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2024-05-01T08:51:05.983316744Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=979.835µs grafana | logger=migrator t=2024-05-01T08:51:05.98631044Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2024-05-01T08:51:05.986873081Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=562.73µs grafana | logger=migrator t=2024-05-01T08:51:05.993102016Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2024-05-01T08:51:05.993675457Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - 
v2" duration=573.581µs grafana | logger=migrator t=2024-05-01T08:51:05.998676265Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2024-05-01T08:51:05.998939819Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=263.835µs grafana | logger=migrator t=2024-05-01T08:51:06.001410276Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2024-05-01T08:51:06.001988238Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=578.262µs grafana | logger=migrator t=2024-05-01T08:51:06.007733826Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2024-05-01T08:51:06.007790099Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=59.283µs grafana | logger=migrator t=2024-05-01T08:51:06.011341287Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2024-05-01T08:51:06.012676548Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.333391ms grafana | logger=migrator t=2024-05-01T08:51:06.016364532Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2024-05-01T08:51:06.017624659Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.260617ms grafana | logger=migrator t=2024-05-01T08:51:06.02237608Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2024-05-01T08:51:06.023658148Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.282298ms grafana | logger=migrator t=2024-05-01T08:51:06.026273967Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator 
t=2024-05-01T08:51:06.026852647Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=578.58µs
grafana | logger=migrator t=2024-05-01T08:51:06.028861983Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
grafana | logger=migrator t=2024-05-01T08:51:06.031934735Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.071512ms
grafana | logger=migrator t=2024-05-01T08:51:06.037641427Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
grafana | logger=migrator t=2024-05-01T08:51:06.038202766Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=561.219µs
grafana | logger=migrator t=2024-05-01T08:51:06.041062337Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
grafana | logger=migrator t=2024-05-01T08:51:06.041585535Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=523.148µs
grafana | logger=migrator t=2024-05-01T08:51:06.046589499Z level=info msg="Executing migration" id="Update dashboard table charset"
grafana | logger=migrator t=2024-05-01T08:51:06.04660953Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=20.701µs
grafana | logger=migrator t=2024-05-01T08:51:06.052363274Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
grafana | logger=migrator t=2024-05-01T08:51:06.052384275Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=21.521µs
grafana | logger=migrator t=2024-05-01T08:51:06.056321934Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
grafana | logger=migrator t=2024-05-01T08:51:06.05778664Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.464657ms
grafana | logger=migrator t=2024-05-01T08:51:06.060705635Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
grafana | logger=migrator t=2024-05-01T08:51:06.062047126Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.341422ms
grafana | logger=migrator t=2024-05-01T08:51:06.067397839Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
grafana | logger=migrator t=2024-05-01T08:51:06.068732298Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.33465ms
grafana | logger=migrator t=2024-05-01T08:51:06.07121975Z level=info msg="Executing migration" id="Add column uid in dashboard"
grafana | logger=migrator t=2024-05-01T08:51:06.07256159Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.34175ms
grafana | logger=migrator t=2024-05-01T08:51:06.075158898Z level=info msg="Executing migration" id="Update uid column values in dashboard"
grafana | logger=migrator t=2024-05-01T08:51:06.075297715Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=138.727µs
grafana | logger=migrator t=2024-05-01T08:51:06.077020156Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
grafana | logger=migrator t=2024-05-01T08:51:06.077551504Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=531.118µs
grafana | logger=migrator t=2024-05-01T08:51:06.081937106Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
grafana | logger=migrator t=2024-05-01T08:51:06.082419611Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=482.545µs
grafana | logger=migrator t=2024-05-01T08:51:06.086250803Z level=info msg="Executing migration" id="Update dashboard title length"
grafana | logger=migrator t=2024-05-01T08:51:06.086270424Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=20.221µs
grafana | logger=migrator t=2024-05-01T08:51:06.089551347Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
grafana | logger=migrator t=2024-05-01T08:51:06.090113328Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=561.911µs
grafana | logger=migrator t=2024-05-01T08:51:06.096495905Z level=info msg="Executing migration" id="create dashboard_provisioning"
grafana | logger=migrator t=2024-05-01T08:51:06.097072755Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=577.33µs
grafana | logger=migrator t=2024-05-01T08:51:06.099576037Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
grafana | logger=migrator t=2024-05-01T08:51:06.103283733Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=3.706596ms
grafana | logger=migrator t=2024-05-01T08:51:06.106703854Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
grafana | logger=migrator t=2024-05-01T08:51:06.108185852Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=1.476707ms
grafana | logger=migrator t=2024-05-01T08:51:06.11686059Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
grafana | logger=migrator t=2024-05-01T08:51:06.117831242Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=971.502µs
grafana | logger=migrator t=2024-05-01T08:51:06.121501375Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
grafana | logger=migrator t=2024-05-01T08:51:06.122365751Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=864.425µs
grafana | logger=migrator t=2024-05-01T08:51:06.125186229Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
grafana | logger=migrator t=2024-05-01T08:51:06.125450023Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=263.734µs
grafana | logger=migrator t=2024-05-01T08:51:06.129402913Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
grafana | logger=migrator t=2024-05-01T08:51:06.129827895Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=424.772µs
grafana | logger=migrator t=2024-05-01T08:51:06.133598194Z level=info msg="Executing migration" id="Add check_sum column"
grafana | logger=migrator t=2024-05-01T08:51:06.135783939Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.185105ms
grafana | logger=migrator t=2024-05-01T08:51:06.138693663Z level=info msg="Executing migration" id="Add index for dashboard_title"
grafana | logger=migrator t=2024-05-01T08:51:06.139632712Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=938.669µs
grafana | logger=migrator t=2024-05-01T08:51:06.144038645Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
grafana | logger=migrator t=2024-05-01T08:51:06.144264007Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=225.032µs
grafana | logger=migrator t=2024-05-01T08:51:06.149693634Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
grafana | logger=migrator t=2024-05-01T08:51:06.149922036Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=228.252µs
grafana | logger=migrator t=2024-05-01T08:51:06.153889175Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
grafana | logger=migrator t=2024-05-01T08:51:06.154868647Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=979.422µs
grafana | logger=migrator t=2024-05-01T08:51:06.158662117Z level=info msg="Executing migration" id="Add isPublic for dashboard"
grafana | logger=migrator t=2024-05-01T08:51:06.161145038Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.484321ms
grafana | logger=migrator t=2024-05-01T08:51:06.164074373Z level=info msg="Executing migration" id="create data_source table"
grafana | logger=migrator t=2024-05-01T08:51:06.165108257Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.033154ms
grafana | logger=migrator t=2024-05-01T08:51:06.168451215Z level=info msg="Executing migration" id="add index data_source.account_id"
grafana | logger=migrator t=2024-05-01T08:51:06.170047948Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.596843ms
grafana | logger=migrator t=2024-05-01T08:51:06.17442655Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
grafana | logger=migrator t=2024-05-01T08:51:06.175882386Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.455656ms
grafana | logger=migrator t=2024-05-01T08:51:06.179451346Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
grafana | logger=migrator t=2024-05-01T08:51:06.180276039Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=822.963µs
grafana | logger=migrator t=2024-05-01T08:51:06.183444946Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
grafana | logger=migrator t=2024-05-01T08:51:06.184269009Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=824.373µs
grafana | logger=migrator t=2024-05-01T08:51:06.188295613Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
grafana | logger=migrator t=2024-05-01T08:51:06.199450242Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=11.152239ms
grafana | logger=migrator t=2024-05-01T08:51:06.203450242Z level=info msg="Executing migration" id="create data_source table v2"
grafana | logger=migrator t=2024-05-01T08:51:06.204546141Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.095689ms
grafana | logger=migrator t=2024-05-01T08:51:06.207693097Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
grafana | logger=migrator t=2024-05-01T08:51:06.20871272Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.019263ms
grafana | logger=migrator t=2024-05-01T08:51:06.216131462Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
grafana | logger=migrator t=2024-05-01T08:51:06.21722883Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.097008ms
grafana | logger=migrator t=2024-05-01T08:51:06.220343084Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
grafana | logger=migrator t=2024-05-01T08:51:06.220966477Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=622.883µs
grafana | logger=migrator t=2024-05-01T08:51:06.22406304Z level=info msg="Executing migration" id="Add column with_credentials"
grafana | logger=migrator t=2024-05-01T08:51:06.225766601Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.703211ms
grafana | logger=migrator t=2024-05-01T08:51:06.22954591Z level=info msg="Executing migration" id="Add secure json data column"
grafana | logger=migrator t=2024-05-01T08:51:06.231223579Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.678139ms
grafana | logger=migrator t=2024-05-01T08:51:06.234337243Z level=info msg="Executing migration" id="Update data_source table charset"
grafana | logger=migrator t=2024-05-01T08:51:06.234390596Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=53.223µs
grafana | logger=migrator t=2024-05-01T08:51:06.237013715Z level=info msg="Executing migration" id="Update initial version to 1"
grafana | logger=migrator t=2024-05-01T08:51:06.237376214Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=362.509µs
grafana | logger=migrator t=2024-05-01T08:51:06.241395247Z level=info msg="Executing migration" id="Add read_only data column"
grafana | logger=migrator t=2024-05-01T08:51:06.243838556Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.44319ms
grafana | logger=migrator t=2024-05-01T08:51:06.246911978Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
grafana | logger=migrator t=2024-05-01T08:51:06.247196313Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=283.845µs
grafana | logger=migrator t=2024-05-01T08:51:06.25054983Z level=info msg="Executing migration" id="Update json_data with nulls"
grafana | logger=migrator t=2024-05-01T08:51:06.250843635Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=293.425µs
grafana | logger=migrator t=2024-05-01T08:51:06.253509596Z level=info msg="Executing migration" id="Add uid column"
grafana | logger=migrator t=2024-05-01T08:51:06.255544254Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.034958ms
grafana | logger=migrator t=2024-05-01T08:51:06.614473527Z level=info msg="Executing migration" id="Update uid value"
grafana | logger=migrator t=2024-05-01T08:51:06.61567537Z level=info msg="Migration successfully executed" id="Update uid value" duration=1.198223ms
grafana | logger=migrator t=2024-05-01T08:51:06.621744981Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
grafana | logger=migrator t=2024-05-01T08:51:06.622557294Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=812.453µs
grafana | logger=migrator t=2024-05-01T08:51:06.626094771Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
grafana | logger=migrator t=2024-05-01T08:51:06.626899983Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=805.062µs
grafana | logger=migrator t=2024-05-01T08:51:06.630786839Z level=info msg="Executing migration" id="create api_key table"
grafana | logger=migrator t=2024-05-01T08:51:06.631652794Z level=info msg="Migration successfully executed" id="create api_key table" duration=865.855µs
grafana | logger=migrator t=2024-05-01T08:51:06.635863156Z level=info msg="Executing migration" id="add index api_key.account_id"
grafana | logger=migrator t=2024-05-01T08:51:06.637148655Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.292358ms
grafana | logger=migrator t=2024-05-01T08:51:06.640304281Z level=info msg="Executing migration" id="add index api_key.key"
grafana | logger=migrator t=2024-05-01T08:51:06.640994567Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=689.826µs
grafana | logger=migrator t=2024-05-01T08:51:06.645319716Z level=info msg="Executing migration" id="add index api_key.account_id_name"
grafana | logger=migrator t=2024-05-01T08:51:06.645991751Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=671.765µs
grafana | logger=migrator t=2024-05-01T08:51:06.652646712Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
grafana | logger=migrator t=2024-05-01T08:51:06.653636975Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=992.013µs
grafana | logger=migrator t=2024-05-01T08:51:06.656746969Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
grafana | logger=migrator t=2024-05-01T08:51:06.657629756Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=882.907µs
grafana | logger=migrator t=2024-05-01T08:51:06.661650249Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
grafana | logger=migrator t=2024-05-01T08:51:06.662499823Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=900.977µs
grafana | logger=migrator t=2024-05-01T08:51:06.667864017Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
grafana | logger=migrator t=2024-05-01T08:51:06.710764892Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=42.895665ms
grafana | logger=migrator t=2024-05-01T08:51:06.747741924Z level=info msg="Executing migration" id="create api_key table v2"
grafana | logger=migrator t=2024-05-01T08:51:06.750033315Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=2.292691ms
grafana | logger=migrator t=2024-05-01T08:51:06.757066927Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
grafana | logger=migrator t=2024-05-01T08:51:06.757892991Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=826.223µs
grafana | logger=migrator t=2024-05-01T08:51:06.763843895Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
grafana | logger=migrator t=2024-05-01T08:51:06.765211827Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.368043ms
grafana | logger=migrator t=2024-05-01T08:51:06.770523838Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
grafana | logger=migrator t=2024-05-01T08:51:06.771402554Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=878.526µs
grafana | logger=migrator t=2024-05-01T08:51:06.775177993Z level=info msg="Executing migration" id="copy api_key v1 to v2"
grafana | logger=migrator t=2024-05-01T08:51:06.775622206Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=441.583µs
grafana | logger=migrator t=2024-05-01T08:51:06.781966172Z level=info msg="Executing migration" id="Drop old table api_key_v1"
grafana | logger=migrator t=2024-05-01T08:51:06.783018057Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.053955ms
grafana | logger=migrator t=2024-05-01T08:51:06.814606795Z level=info msg="Executing migration" id="Update api_key table charset"
grafana | logger=migrator t=2024-05-01T08:51:06.814668538Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=66.363µs
grafana | logger=migrator t=2024-05-01T08:51:06.82019202Z level=info msg="Executing migration" id="Add expires to api_key table"
grafana | logger=migrator t=2024-05-01T08:51:06.824821815Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.628175ms
grafana | logger=migrator t=2024-05-01T08:51:06.851182136Z level=info msg="Executing migration" id="Add service account foreign key"
grafana | logger=migrator t=2024-05-01T08:51:06.854031937Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.852341ms
grafana | logger=migrator t=2024-05-01T08:51:06.861502592Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
grafana | logger=migrator t=2024-05-01T08:51:06.86185082Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=347.618µs
grafana | logger=migrator t=2024-05-01T08:51:06.865627159Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
grafana | logger=migrator t=2024-05-01T08:51:06.87150668Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=5.880421ms
grafana | logger=migrator t=2024-05-01T08:51:06.877494067Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
grafana | logger=migrator t=2024-05-01T08:51:06.882259167Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=4.767861ms
grafana | logger=migrator t=2024-05-01T08:51:06.887876694Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
grafana | logger=migrator t=2024-05-01T08:51:06.889156892Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.279738ms
grafana | logger=migrator t=2024-05-01T08:51:06.894968049Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
grafana | logger=migrator t=2024-05-01T08:51:06.89556689Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=598.671µs
grafana | logger=migrator t=2024-05-01T08:51:06.920807443Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
grafana | logger=migrator t=2024-05-01T08:51:06.922335484Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.526361ms
grafana | logger=migrator t=2024-05-01T08:51:06.981280457Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
grafana | logger=migrator t=2024-05-01T08:51:06.98267068Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.411884ms
grafana | logger=migrator t=2024-05-01T08:51:06.986983368Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
grafana | logger=migrator t=2024-05-01T08:51:06.987812702Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=830.434µs
grafana | logger=migrator t=2024-05-01T08:51:06.992416355Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
grafana | logger=migrator t=2024-05-01T08:51:06.993256069Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=836.564µs
grafana | logger=migrator t=2024-05-01T08:51:06.997372507Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
grafana | logger=migrator t=2024-05-01T08:51:06.997439711Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=67.934µs
grafana | logger=migrator t=2024-05-01T08:51:07.000551305Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
grafana | logger=migrator t=2024-05-01T08:51:07.000579386Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=29.171µs
grafana | logger=migrator t=2024-05-01T08:51:07.008029583Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
grafana | logger=migrator t=2024-05-01T08:51:07.010839365Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.811013ms
grafana | logger=migrator t=2024-05-01T08:51:07.014238941Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
grafana | logger=migrator t=2024-05-01T08:51:07.017045605Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.806094ms
grafana | logger=migrator t=2024-05-01T08:51:07.020973141Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
grafana | logger=migrator t=2024-05-01T08:51:07.021037325Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=65.003µs
grafana | logger=migrator t=2024-05-01T08:51:07.023911001Z level=info msg="Executing migration" id="create quota table v1"
grafana | logger=migrator t=2024-05-01T08:51:07.024616081Z level=info msg="Migration successfully executed" id="create quota table v1" duration=702.75µs
grafana | logger=migrator t=2024-05-01T08:51:07.028812164Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
grafana | logger=migrator t=2024-05-01T08:51:07.032202589Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=3.390125ms
grafana | logger=migrator t=2024-05-01T08:51:07.069889723Z level=info msg="Executing migration" id="Update quota table charset"
grafana | logger=migrator t=2024-05-01T08:51:07.069928495Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=42.452µs
grafana | logger=migrator t=2024-05-01T08:51:07.07467915Z level=info msg="Executing migration" id="create plugin_setting table"
grafana | logger=migrator t=2024-05-01T08:51:07.075351768Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=673.399µs
grafana | logger=migrator t=2024-05-01T08:51:07.114008629Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
grafana | logger=migrator t=2024-05-01T08:51:07.115698416Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.692408ms
grafana | logger=migrator t=2024-05-01T08:51:07.120902327Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
grafana | logger=migrator t=2024-05-01T08:51:07.123761602Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.860555ms
grafana | logger=migrator t=2024-05-01T08:51:07.128383748Z level=info msg="Executing migration" id="Update plugin_setting table charset"
grafana | logger=migrator t=2024-05-01T08:51:07.128437621Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=58.674µs
grafana | logger=migrator t=2024-05-01T08:51:07.131985656Z level=info msg="Executing migration" id="create session table"
grafana | logger=migrator t=2024-05-01T08:51:07.133369476Z level=info msg="Migration successfully executed" id="create session table" duration=1.384049ms
grafana | logger=migrator t=2024-05-01T08:51:07.139873041Z level=info msg="Executing migration" id="Drop old table playlist table"
grafana | logger=migrator t=2024-05-01T08:51:07.139977597Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=105.186µs
grafana | logger=migrator t=2024-05-01T08:51:07.143438647Z level=info msg="Executing migration" id="Drop old table playlist_item table"
grafana | logger=migrator t=2024-05-01T08:51:07.143521522Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=83.555µs
grafana | logger=migrator t=2024-05-01T08:51:07.150005136Z level=info msg="Executing migration" id="create playlist table v2"
grafana | logger=migrator t=2024-05-01T08:51:07.151208705Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.204309ms
grafana | logger=migrator t=2024-05-01T08:51:07.155660922Z level=info msg="Executing migration" id="create playlist item table v2"
grafana | logger=migrator t=2024-05-01T08:51:07.156372483Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=711.71µs
grafana | logger=migrator t=2024-05-01T08:51:07.166116165Z level=info msg="Executing migration" id="Update playlist table charset"
grafana | logger=migrator t=2024-05-01T08:51:07.166161888Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=46.953µs
grafana | logger=migrator t=2024-05-01T08:51:07.172193036Z level=info msg="Executing migration" id="Update playlist_item table charset"
grafana | logger=migrator t=2024-05-01T08:51:07.172236938Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=44.682µs
grafana | logger=migrator t=2024-05-01T08:51:07.175197529Z level=info msg="Executing migration" id="Add playlist column created_at"
grafana | logger=migrator t=2024-05-01T08:51:07.179166658Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.968448ms
grafana | logger=migrator t=2024-05-01T08:51:07.185137963Z level=info msg="Executing migration" id="Add playlist column updated_at"
grafana | logger=migrator t=2024-05-01T08:51:07.190000833Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=4.86185ms
grafana | logger=migrator t=2024-05-01T08:51:07.193356036Z level=info msg="Executing migration" id="drop preferences table v2"
grafana | logger=migrator t=2024-05-01T08:51:07.193466033Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=112.127µs
grafana | logger=migrator t=2024-05-01T08:51:07.197635203Z level=info msg="Executing migration" id="drop preferences table v3"
grafana | logger=migrator t=2024-05-01T08:51:07.197724679Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=90.446µs
grafana | logger=migrator t=2024-05-01T08:51:07.201293584Z level=info msg="Executing migration" id="create preferences table v3"
grafana | logger=migrator t=2024-05-01T08:51:07.202132453Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=839.279µs
grafana | logger=migrator t=2024-05-01T08:51:07.207888215Z level=info msg="Executing migration" id="Update preferences table charset"
grafana | logger=migrator t=2024-05-01T08:51:07.207983781Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=95.706µs
grafana | logger=migrator t=2024-05-01T08:51:07.212391385Z level=info msg="Executing migration" id="Add column team_id in preferences"
grafana | logger=migrator t=2024-05-01T08:51:07.216709154Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.31924ms
grafana | logger=migrator t=2024-05-01T08:51:07.224847784Z level=info msg="Executing migration" id="Update team_id column values in preferences"
grafana | logger=migrator t=2024-05-01T08:51:07.22513471Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=287.606µs
grafana | logger=migrator t=2024-05-01T08:51:07.230416185Z level=info msg="Executing migration" id="Add column week_start in preferences"
grafana | logger=migrator t=2024-05-01T08:51:07.235237352Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.821787ms
grafana | logger=migrator t=2024-05-01T08:51:07.238128529Z level=info msg="Executing migration" id="Add column preferences.json_data"
grafana | logger=migrator t=2024-05-01T08:51:07.241174246Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.045177ms
policy-apex-pdp | Waiting for mariadb port 3306...
policy-apex-pdp | mariadb (172.17.0.2:3306) open
policy-apex-pdp | Waiting for kafka port 9092...
policy-apex-pdp | kafka (172.17.0.6:9092) open
policy-apex-pdp | Waiting for pap port 6969...
policy-apex-pdp | pap (172.17.0.10:6969) open
policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
policy-apex-pdp | [2024-05-01T08:51:40.777+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
policy-apex-pdp | [2024-05-01T08:51:40.943+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | 	allow.auto.create.topics = true
policy-apex-pdp | 	auto.commit.interval.ms = 5000
policy-apex-pdp | 	auto.include.jmx.reporter = true
policy-apex-pdp | 	auto.offset.reset = latest
policy-apex-pdp | 	bootstrap.servers = [kafka:9092]
policy-apex-pdp | 	check.crcs = true
policy-apex-pdp | 	client.dns.lookup = use_all_dns_ips
policy-apex-pdp | 	client.id = consumer-be7c28cf-ee32-4168-825d-edc2db369b35-1
policy-apex-pdp | 	client.rack =
policy-apex-pdp | 	connections.max.idle.ms = 540000
policy-apex-pdp | 	default.api.timeout.ms = 60000
policy-apex-pdp | 	enable.auto.commit = true
policy-apex-pdp | 	exclude.internal.topics = true
policy-apex-pdp | 	fetch.max.bytes = 52428800
policy-apex-pdp | 	fetch.max.wait.ms = 500
policy-apex-pdp | 	fetch.min.bytes = 1
policy-apex-pdp | 	group.id = be7c28cf-ee32-4168-825d-edc2db369b35
policy-apex-pdp | 	group.instance.id = null
policy-apex-pdp | 	heartbeat.interval.ms = 3000
policy-apex-pdp | 	interceptor.classes = []
policy-apex-pdp | 	internal.leave.group.on.close = true
policy-apex-pdp | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | 	isolation.level = read_uncommitted
policy-apex-pdp | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | 	max.partition.fetch.bytes = 1048576
policy-apex-pdp | 	max.poll.interval.ms = 300000
policy-apex-pdp | 	max.poll.records = 500
policy-apex-pdp | 	metadata.max.age.ms = 300000
policy-apex-pdp | 	metric.reporters = []
grafana | logger=migrator t=2024-05-01T08:51:07.245180767Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
grafana | logger=migrator t=2024-05-01T08:51:07.245265081Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=84.695µs
grafana | logger=migrator t=2024-05-01T08:51:07.248124176Z level=info msg="Executing migration" id="Add preferences index org_id"
grafana | logger=migrator t=2024-05-01T08:51:07.249123694Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=998.797µs
grafana | logger=migrator t=2024-05-01T08:51:07.254861445Z level=info msg="Executing migration" id="Add preferences index user_id"
grafana | logger=migrator t=2024-05-01T08:51:07.256203513Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.342128ms
grafana | logger=migrator t=2024-05-01T08:51:07.259720775Z level=info msg="Executing migration" id="create alert table v1"
grafana | logger=migrator t=2024-05-01T08:51:07.261560142Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.837996ms
grafana | logger=migrator t=2024-05-01T08:51:07.267585449Z level=info msg="Executing migration" id="add index alert org_id & id "
grafana | logger=migrator t=2024-05-01T08:51:07.268545714Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=961.565µs
grafana | logger=migrator t=2024-05-01T08:51:07.271359297Z level=info msg="Executing migration" id="add index alert state"
grafana | logger=migrator t=2024-05-01T08:51:07.272137342Z level=info msg="Migration successfully executed" id="add index alert state" duration=777.986µs
grafana | logger=migrator t=2024-05-01T08:51:07.27469971Z level=info msg="Executing migration" id="add index alert dashboard_id"
grafana | logger=migrator t=2024-05-01T08:51:07.275515437Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=815.687µs
grafana | logger=migrator t=2024-05-01T08:51:07.282503399Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
grafana | logger=migrator t=2024-05-01T08:51:07.283674178Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.2034ms
grafana | logger=migrator t=2024-05-01T08:51:07.290075637Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
grafana | logger=migrator t=2024-05-01T08:51:07.29134746Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.274133ms
grafana | logger=migrator t=2024-05-01T08:51:07.297590451Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
grafana | logger=migrator t=2024-05-01T08:51:07.298543915Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=953.265µs
grafana | logger=migrator t=2024-05-01T08:51:07.305152126Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
grafana | logger=migrator t=2024-05-01T08:51:07.315324653Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.171897ms
grafana | logger=migrator t=2024-05-01T08:51:07.325074355Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
grafana | logger=migrator t=2024-05-01T08:51:07.326946344Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.871538ms
grafana | logger=migrator t=2024-05-01T08:51:07.333049586Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
grafana | logger=migrator t=2024-05-01T08:51:07.334076896Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.027269ms
grafana | logger=migrator t=2024-05-01T08:51:07.342276299Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
grafana | logger=migrator t=2024-05-01T08:51:07.342849091Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=571.923µs
grafana | logger=migrator t=2024-05-01T08:51:07.350420419Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
grafana | logger=migrator t=2024-05-01T08:51:07.351346512Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=925.133µs
grafana | logger=migrator t=2024-05-01T08:51:07.360535182Z level=info msg="Executing migration" id="create alert_notification table v1"
grafana | logger=migrator t=2024-05-01T08:51:07.362096772Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.561701ms
grafana | logger=migrator t=2024-05-01T08:51:07.371708146Z level=info msg="Executing migration" id="Add column is_default"
grafana | logger=migrator t=2024-05-01T08:51:07.378386702Z level=info msg="Migration successfully executed" id="Add column is_default" duration=6.678246ms
grafana | logger=migrator t=2024-05-01T08:51:07.38564619Z level=info msg="Executing migration" id="Add column frequency"
grafana | logger=migrator t=2024-05-01T08:51:07.389591319Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.942268ms
grafana | logger=migrator t=2024-05-01T08:51:07.398355324Z level=info msg="Executing migration" id="Add column send_reminder"
grafana | logger=migrator t=2024-05-01T08:51:07.404122067Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=5.749222ms
grafana | logger=migrator t=2024-05-01T08:51:07.41077121Z level=info msg="Executing migration" id="Add column disable_resolve_message"
grafana | logger=migrator t=2024-05-01T08:51:07.41458628Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.814639ms
grafana | logger=migrator t=2024-05-01T08:51:07.419718136Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
grafana | logger=migrator t=2024-05-01T08:51:07.420893464Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.174938ms
grafana | logger=migrator t=2024-05-01T08:51:07.430049192Z level=info msg="Executing migration" id="Update alert table charset"
grafana | logger=migrator t=2024-05-01T08:51:07.430142398Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=94.225µs
grafana | logger=migrator t=2024-05-01T08:51:07.438945205Z level=info msg="Executing migration" id="Update alert_notification table charset"
grafana | logger=migrator t=2024-05-01T08:51:07.439062012Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=115.637µs
grafana | logger=migrator t=2024-05-01T08:51:07.447305878Z level=info msg="Executing migration" id="create notification_journal table v1"
grafana | logger=migrator t=2024-05-01T08:51:07.448701169Z level=info msg="Migration successfully executed" id="create notification_journal table v1"
duration=1.394391ms
grafana | logger=migrator t=2024-05-01T08:51:07.454990041Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
grafana | logger=migrator t=2024-05-01T08:51:07.456438194Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.450403ms
grafana | logger=migrator t=2024-05-01T08:51:07.462654194Z level=info msg="Executing migration" id="drop alert_notification_journal"
grafana | logger=migrator t=2024-05-01T08:51:07.464429495Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.775712ms
policy-db-migrator | Waiting for mariadb port 3306...
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded!
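The `nc` retry loop above (policy-db-migrator waiting for mariadb on 3306 before running its upgrade scripts) is the standard wait-for-port pattern. A minimal Python sketch of the same idea; the function name and retry parameters are illustrative, not part of the migrator image, whose actual loop is shell-based:

```python
import socket
import time

def wait_for_port(host: str, port: int, retries: int = 30, delay: float = 1.0) -> bool:
    """Retry a TCP connect until it succeeds or retries are exhausted."""
    for _ in range(retries):
        try:
            # create_connection raises OSError (e.g. ConnectionRefusedError)
            # while the database container is still starting up
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(delay)
    return False
```

Like the `nc` loop in the log, this only proves the TCP port is open, not that the database is ready to accept queries.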
policy-db-migrator | 321 blocks policy-db-migrator | Preparing upgrade release version: 0800 policy-db-migrator | Preparing upgrade release version: 0900 policy-db-migrator | Preparing upgrade release version: 1000 policy-db-migrator | Preparing upgrade release version: 1100 policy-db-migrator | Preparing upgrade release version: 1200 policy-db-migrator | Preparing upgrade release version: 1300 policy-db-migrator | Done policy-db-migrator | name version policy-db-migrator | policyadmin 0 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-db-migrator | upgrade: 0 -> 1300 policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) 
DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) grafana | logger=migrator t=2024-05-01T08:51:07.471624531Z level=info msg="Executing migration" id="create alert_notification_state table v1" grafana | logger=migrator t=2024-05-01T08:51:07.472585006Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=952.464µs grafana | logger=migrator t=2024-05-01T08:51:07.478963524Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" grafana | logger=migrator t=2024-05-01T08:51:07.480547916Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.583621ms grafana | logger=migrator t=2024-05-01T08:51:07.491265814Z level=info msg="Executing migration" id="Add for to alert table" grafana | logger=migrator t=2024-05-01T08:51:07.497236989Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=5.972154ms grafana | logger=migrator t=2024-05-01T08:51:07.501796292Z level=info msg="Executing migration" id="Add column uid in alert_notification" grafana | logger=migrator t=2024-05-01T08:51:07.505458863Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.662641ms grafana | logger=migrator t=2024-05-01T08:51:07.511490811Z level=info msg="Executing migration" id="Update uid column values in 
alert_notification" grafana | logger=migrator t=2024-05-01T08:51:07.511732185Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=241.014µs grafana | logger=migrator t=2024-05-01T08:51:07.531984903Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" grafana | logger=migrator t=2024-05-01T08:51:07.532965Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=983.427µs grafana | logger=migrator t=2024-05-01T08:51:07.536990722Z level=info msg="Executing migration" id="Remove unique index org_id_name" grafana | logger=migrator t=2024-05-01T08:51:07.537700923Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=710.03µs grafana | logger=migrator t=2024-05-01T08:51:07.544959692Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" grafana | logger=migrator t=2024-05-01T08:51:07.548524797Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.563306ms grafana | logger=migrator t=2024-05-01T08:51:07.557603241Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" grafana | logger=migrator t=2024-05-01T08:51:07.557771811Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=168.67µs grafana | logger=migrator t=2024-05-01T08:51:07.562985051Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" grafana | logger=migrator t=2024-05-01T08:51:07.564007261Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.02251ms grafana | logger=migrator t=2024-05-01T08:51:07.569990376Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" grafana | logger=migrator 
t=2024-05-01T08:51:07.571930188Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.939112ms grafana | logger=migrator t=2024-05-01T08:51:07.582800204Z level=info msg="Executing migration" id="Drop old annotation table v4" grafana | logger=migrator t=2024-05-01T08:51:07.583000307Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=198.473µs grafana | logger=migrator t=2024-05-01T08:51:07.592438221Z level=info msg="Executing migration" id="create annotation table v5" grafana | logger=migrator t=2024-05-01T08:51:07.593489951Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.05099ms grafana | logger=migrator t=2024-05-01T08:51:07.598078366Z level=info msg="Executing migration" id="add index annotation 0 v3" grafana | logger=migrator t=2024-05-01T08:51:07.599162039Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.086262ms grafana | logger=migrator t=2024-05-01T08:51:07.609677075Z level=info msg="Executing migration" id="add index annotation 1 v3" grafana | logger=migrator t=2024-05-01T08:51:07.610690954Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.013878ms grafana | logger=migrator t=2024-05-01T08:51:07.620638798Z level=info msg="Executing migration" id="add index annotation 2 v3" grafana | logger=migrator t=2024-05-01T08:51:07.621690318Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.04807ms grafana | logger=migrator t=2024-05-01T08:51:07.625046563Z level=info msg="Executing migration" id="add index annotation 3 v3" grafana | logger=migrator t=2024-05-01T08:51:07.626154006Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.107123ms grafana | logger=migrator t=2024-05-01T08:51:07.633250515Z level=info msg="Executing migration" id="add index annotation 4 v3" 
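The grafana migrator entries above report each migration's duration in mixed units (µs and ms). When skimming a log like this, a small sketch for totaling them, assuming the `duration=` suffix format shown (the helper names are illustrative):

```python
import re

# Matches the trailing duration field of a grafana migrator entry,
# e.g. duration=961.585µs or duration=1.837996ms ("µs" before "s" so
# the alternation does not truncate the microsecond unit).
DURATION_RE = re.compile(r"duration=([\d.]+)(µs|ms|s)")

UNIT_TO_MS = {"µs": 0.001, "ms": 1.0, "s": 1000.0}

def total_migration_ms(log_text: str) -> float:
    """Sum all duration= fields in a log excerpt, normalized to milliseconds."""
    return sum(float(value) * UNIT_TO_MS[unit]
               for value, unit in DURATION_RE.findall(log_text))
```

Running this over the migrator excerpt gives a quick view of where schema-migration time goes during container startup.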
grafana | logger=migrator t=2024-05-01T08:51:07.63437436Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.123825ms grafana | logger=migrator t=2024-05-01T08:51:07.640319783Z level=info msg="Executing migration" id="Update annotation table charset" grafana | logger=migrator t=2024-05-01T08:51:07.640346135Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=27.072µs grafana | logger=migrator t=2024-05-01T08:51:07.644146114Z level=info msg="Executing migration" id="Add column region_id to annotation table" grafana | logger=migrator t=2024-05-01T08:51:07.648296444Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.14817ms grafana | logger=migrator t=2024-05-01T08:51:07.655306998Z level=info msg="Executing migration" id="Drop category_id index" grafana | logger=migrator t=2024-05-01T08:51:07.656349438Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.04248ms grafana | logger=migrator t=2024-05-01T08:51:07.661848206Z level=info msg="Executing migration" id="Add column tags to annotation table" grafana | logger=migrator t=2024-05-01T08:51:07.666661663Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.812666ms grafana | logger=migrator t=2024-05-01T08:51:07.675899596Z level=info msg="Executing migration" id="Create annotation_tag table v2" grafana | logger=migrator t=2024-05-01T08:51:07.677174479Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.275063ms grafana | logger=migrator t=2024-05-01T08:51:07.685187712Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" grafana | logger=migrator t=2024-05-01T08:51:07.686981716Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.782503ms grafana | 
logger=migrator t=2024-05-01T08:51:07.690442545Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" grafana | logger=migrator t=2024-05-01T08:51:07.691975443Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.546099ms grafana | logger=migrator t=2024-05-01T08:51:07.698443847Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 
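The ConsumerConfig dump interleaved above prints one `key = value` pair per entry under the `policy-apex-pdp |` container prefix. To diff such dumps between runs, the pairs can be folded back into a dict; a sketch assuming that layout (the function and default prefix are illustrative):

```python
def parse_config_dump(lines, prefix="policy-apex-pdp | "):
    """Collect 'key = value' pairs from prefixed container-log lines."""
    config = {}
    for line in lines:
        if not line.startswith(prefix):
            continue  # entry from a different container
        body = line[len(prefix):]
        if " = " not in body:
            continue  # free-form line (e.g. timestamped INFO messages)
        key, _, value = body.partition(" = ")
        config[key.strip()] = value.strip()
    return config
```

Values stay as strings (e.g. `"null"`, `"[kafka:9092]"`), which is sufficient for comparing two dumps line by line.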
policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null grafana | logger=migrator t=2024-05-01T08:51:07.71098425Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=12.537483ms grafana | logger=migrator t=2024-05-01T08:51:07.716369961Z level=info msg="Executing migration" id="Create annotation_tag table v3" grafana | logger=migrator t=2024-05-01T08:51:07.717071011Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=700.62µs grafana | logger=migrator t=2024-05-01T08:51:07.722353226Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" grafana | logger=migrator t=2024-05-01T08:51:07.72397803Z level=info msg="Migration successfully 
executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.624344ms grafana | logger=migrator t=2024-05-01T08:51:07.732532534Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" grafana | logger=migrator t=2024-05-01T08:51:07.733149509Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=617.496µs grafana | logger=migrator t=2024-05-01T08:51:07.740029816Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2024-05-01T08:51:07.741199513Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=1.168567ms grafana | logger=migrator t=2024-05-01T08:51:07.74702544Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2024-05-01T08:51:07.747345828Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=320.059µs grafana | logger=migrator t=2024-05-01T08:51:07.75327618Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2024-05-01T08:51:07.759740714Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.464953ms grafana | logger=migrator t=2024-05-01T08:51:07.767564404Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2024-05-01T08:51:07.774351516Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=6.784732ms grafana | logger=migrator t=2024-05-01T08:51:07.788263289Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2024-05-01T08:51:07.789948866Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.687166ms grafana | 
logger=migrator t=2024-05-01T08:51:07.796300553Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2024-05-01T08:51:07.797221676Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=920.402µs grafana | logger=migrator t=2024-05-01T08:51:07.802335861Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2024-05-01T08:51:07.802713412Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=373.861µs grafana | logger=migrator t=2024-05-01T08:51:07.806618297Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2024-05-01T08:51:07.814899365Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=8.279828ms grafana | logger=migrator t=2024-05-01T08:51:07.827779009Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2024-05-01T08:51:07.829207991Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.426342ms grafana | logger=migrator t=2024-05-01T08:51:07.836674672Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2024-05-01T08:51:07.836985659Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=315.328µs grafana | logger=migrator t=2024-05-01T08:51:07.841865011Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2024-05-01T08:51:07.842344919Z level=info msg="Migration successfully executed" id="Move region to single row" duration=479.598µs grafana | logger=migrator t=2024-05-01T08:51:07.854347641Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator 
t=2024-05-01T08:51:07.85571392Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.365949ms grafana | logger=migrator t=2024-05-01T08:51:07.865900018Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2024-05-01T08:51:07.867067355Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.168827ms grafana | logger=migrator t=2024-05-01T08:51:07.874353776Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2024-05-01T08:51:07.8756497Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.299994ms grafana | logger=migrator t=2024-05-01T08:51:07.882381178Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2024-05-01T08:51:07.883731157Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.351429ms grafana | logger=migrator t=2024-05-01T08:51:07.886516587Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2024-05-01T08:51:07.887656363Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.141406ms grafana | logger=migrator t=2024-05-01T08:51:07.894613175Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2024-05-01T08:51:07.899741411Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=5.127276ms grafana | logger=migrator t=2024-05-01T08:51:07.91204088Z level=info 
msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2024-05-01T08:51:07.912146026Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=110.576µs grafana | logger=migrator t=2024-05-01T08:51:07.917844635Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2024-05-01T08:51:07.918565366Z level=info msg="Migration successfully executed" id="create test_data table" duration=721.882µs grafana | logger=migrator t=2024-05-01T08:51:07.927031504Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2024-05-01T08:51:07.927915836Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=884.452µs grafana | logger=migrator t=2024-05-01T08:51:07.935122751Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2024-05-01T08:51:07.936498071Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.37525ms grafana | logger=migrator t=2024-05-01T08:51:07.946740491Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2024-05-01T08:51:07.947646864Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=906.363µs grafana | logger=migrator t=2024-05-01T08:51:07.958366363Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2024-05-01T08:51:07.958736384Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=392.773µs grafana | logger=migrator t=2024-05-01T08:51:07.966902845Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | 
logger=migrator t=2024-05-01T08:51:07.967441556Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=543.451µs grafana | logger=migrator t=2024-05-01T08:51:07.977445023Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2024-05-01T08:51:07.977537428Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=94.495µs grafana | logger=migrator t=2024-05-01T08:51:07.98328414Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2024-05-01T08:51:07.983900006Z level=info msg="Migration successfully executed" id="create team table" duration=616.496µs grafana | logger=migrator t=2024-05-01T08:51:07.995799372Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2024-05-01T08:51:07.997508271Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.698378ms grafana | logger=migrator t=2024-05-01T08:51:08.001667761Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2024-05-01T08:51:08.002363631Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=695.03µs grafana | logger=migrator t=2024-05-01T08:51:08.009450754Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2024-05-01T08:51:08.014183556Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.729802ms grafana | logger=migrator t=2024-05-01T08:51:08.023556331Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2024-05-01T08:51:08.023887149Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=338.168µs grafana | logger=migrator t=2024-05-01T08:51:08.028665351Z level=info 
msg="Executing migration" id="Add unique index team_org_id_uid" policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-05-01T08:51:41.092+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-05-01T08:51:41.092+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-05-01T08:51:41.092+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714553501090 policy-apex-pdp | [2024-05-01T08:51:41.094+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-be7c28cf-ee32-4168-825d-edc2db369b35-1, groupId=be7c28cf-ee32-4168-825d-edc2db369b35] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-05-01T08:51:41.104+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-05-01T08:51:41.105+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2024-05-01T08:51:41.106+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=be7c28cf-ee32-4168-825d-edc2db369b35, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper 
[fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2024-05-01T08:51:41.124+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2 policy-apex-pdp | client.rack = kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] kafka | 
group.consumer.heartbeat.interval.ms = 5000
kafka | group.consumer.max.heartbeat.interval.ms = 15000
kafka | group.consumer.max.session.timeout.ms = 60000
kafka | group.consumer.max.size = 2147483647
kafka | group.consumer.min.heartbeat.interval.ms = 5000
kafka | group.consumer.min.session.timeout.ms = 45000
kafka | group.consumer.session.timeout.ms = 45000
kafka | group.coordinator.new.enable = false
kafka | group.coordinator.threads = 1
kafka | group.initial.rebalance.delay.ms = 3000
kafka | group.max.session.timeout.ms = 1800000
kafka | group.max.size = 2147483647
kafka | group.min.session.timeout.ms = 6000
kafka | initial.broker.registration.timeout.ms = 60000
kafka | inter.broker.listener.name = PLAINTEXT
kafka | inter.broker.protocol.version = 3.6-IV2
kafka | kafka.metrics.polling.interval.secs = 10
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS
jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | default.api.timeout.ms = 60000
policy-apex-pdp | enable.auto.commit = true
policy-apex-pdp | exclude.internal.topics = true
policy-apex-pdp | fetch.max.bytes = 52428800
policy-apex-pdp | fetch.max.wait.ms = 500
policy-apex-pdp | fetch.min.bytes = 1
policy-apex-pdp | group.id = be7c28cf-ee32-4168-825d-edc2db369b35
policy-apex-pdp | group.instance.id = null
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | internal.leave.group.on.close = true
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | isolation.level = read_uncommitted
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | max.partition.fetch.bytes = 1048576
policy-apex-pdp | max.poll.interval.ms = 300000
policy-apex-pdp | max.poll.records =
500
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-apex-pdp | receive.buffer.bytes = 65536
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade
0200-jpatoscacapabilitytype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS
jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
grafana | logger=migrator t=2024-05-01T08:51:08.02974536Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.080129ms
grafana | logger=migrator t=2024-05-01T08:51:08.033170857Z level=info msg="Executing migration" id="create team member table"
grafana | logger=migrator t=2024-05-01T08:51:08.033973482Z level=info msg="Migration successfully executed" id="create team member table" duration=802.775µs
grafana | logger=migrator t=2024-05-01T08:51:08.042277446Z level=info msg="Executing migration" id="add index team_member.org_id"
grafana | logger=migrator t=2024-05-01T08:51:08.043290093Z level=info msg="Migration successfully
executed" id="add index team_member.org_id" duration=1.012516ms
grafana | logger=migrator t=2024-05-01T08:51:08.050936592Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
grafana | logger=migrator t=2024-05-01T08:51:08.051852692Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=915.65µs
grafana | logger=migrator t=2024-05-01T08:51:08.058413101Z level=info msg="Executing migration" id="add index team_member.team_id"
grafana | logger=migrator t=2024-05-01T08:51:08.059344583Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=931.492µs
grafana | logger=migrator t=2024-05-01T08:51:08.06403614Z level=info msg="Executing migration" id="Add column email to team table"
grafana | logger=migrator t=2024-05-01T08:51:08.067402874Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=3.366544ms
grafana | logger=migrator t=2024-05-01T08:51:08.071824227Z level=info msg="Executing migration" id="Add column external to team_member table"
grafana | logger=migrator t=2024-05-01T08:51:08.076346445Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.523068ms
grafana | logger=migrator t=2024-05-01T08:51:08.117277629Z level=info msg="Executing migration" id="Add column permission to team_member table"
grafana | logger=migrator t=2024-05-01T08:51:08.123949476Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=6.674016ms
grafana | logger=migrator t=2024-05-01T08:51:08.128112733Z level=info msg="Executing migration" id="create dashboard acl table"
grafana | logger=migrator t=2024-05-01T08:51:08.12878219Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=669.237µs
grafana | logger=migrator t=2024-05-01T08:51:08.134163285Z level=info msg="Executing
migration" id="add index dashboard_acl_dashboard_id"
grafana | logger=migrator t=2024-05-01T08:51:08.135355781Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.196266ms
grafana | logger=migrator t=2024-05-01T08:51:08.139688518Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
grafana | logger=migrator t=2024-05-01T08:51:08.140731036Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.041638ms
grafana | logger=migrator t=2024-05-01T08:51:08.148702682Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
grafana | logger=migrator t=2024-05-01T08:51:08.149699837Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=996.845µs
grafana | logger=migrator t=2024-05-01T08:51:08.152976007Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
grafana | logger=migrator t=2024-05-01T08:51:08.154389555Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.411398ms
grafana | logger=migrator t=2024-05-01T08:51:08.160751973Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
grafana | logger=migrator t=2024-05-01T08:51:08.162259855Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.504442ms
grafana | logger=migrator t=2024-05-01T08:51:08.165735437Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
grafana | logger=migrator t=2024-05-01T08:51:08.166946003Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.209766ms
grafana | logger=migrator t=2024-05-01T08:51:08.171405478Z level=info msg="Executing migration" id="add index dashboard_permission"
grafana | logger=migrator t=2024-05-01T08:51:08.172337138Z
level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=931.71µs
grafana | logger=migrator t=2024-05-01T08:51:08.176608142Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
grafana | logger=migrator t=2024-05-01T08:51:08.177115901Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=507.799µs
grafana | logger=migrator t=2024-05-01T08:51:08.181400125Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-db-migrator |
--------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
grafana | logger=migrator t=2024-05-01T08:51:08.181630068Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=232.213µs
grafana | logger=migrator t=2024-05-01T08:51:08.187378153Z level=info msg="Executing migration" id="create tag table"
grafana | logger=migrator t=2024-05-01T08:51:08.188109494Z level=info msg="Migration successfully executed" id="create tag table" duration=735.651µs
grafana | logger=migrator t=2024-05-01T08:51:08.192393508Z level=info msg="Executing migration" id="add index tag.key_value"
grafana | logger=migrator t=2024-05-01T08:51:08.193110117Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=719.479µs
grafana | logger=migrator t=2024-05-01T08:51:08.198491642Z level=info msg="Executing migration" id="create login attempt table"
grafana | logger=migrator t=2024-05-01T08:51:08.199221793Z level=info msg="Migration successfully executed" id="create login
attempt table" duration=730.011µs
grafana | logger=migrator t=2024-05-01T08:51:08.202474641Z level=info msg="Executing migration" id="add index login_attempt.username"
grafana | logger=migrator t=2024-05-01T08:51:08.20336554Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=888.659µs
grafana | logger=migrator t=2024-05-01T08:51:08.208870482Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
grafana | logger=migrator t=2024-05-01T08:51:08.20976428Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=893.938µs
grafana | logger=migrator t=2024-05-01T08:51:08.214432087Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
grafana | logger=migrator t=2024-05-01T08:51:08.228831906Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.39759ms
grafana | logger=migrator t=2024-05-01T08:51:08.231876523Z level=info msg="Executing migration" id="create login_attempt v2"
grafana | logger=migrator t=2024-05-01T08:51:08.232419012Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=542.299µs
grafana | logger=migrator t=2024-05-01T08:51:08.235375074Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
grafana | logger=migrator t=2024-05-01T08:51:08.236255123Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=877.569µs
grafana | logger=migrator t=2024-05-01T08:51:08.240721018Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
grafana | logger=migrator t=2024-05-01T08:51:08.241334492Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=617.085µs
grafana | logger=migrator t=2024-05-01T08:51:08.244348237Z level=info msg="Executing migration"
id="drop login_attempt_tmp_qwerty"
grafana | logger=migrator t=2024-05-01T08:51:08.245359692Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.015845ms
grafana | logger=migrator t=2024-05-01T08:51:08.248461562Z level=info msg="Executing migration" id="create user auth table"
grafana | logger=migrator t=2024-05-01T08:51:08.249263746Z level=info msg="Migration successfully executed" id="create user auth table" duration=801.754µs
grafana | logger=migrator t=2024-05-01T08:51:08.253179021Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
grafana | logger=migrator t=2024-05-01T08:51:08.254085821Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=906.5µs
grafana | logger=migrator t=2024-05-01T08:51:08.258311143Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
grafana | logger=migrator t=2024-05-01T08:51:08.258410878Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=100.455µs
grafana | logger=migrator t=2024-05-01T08:51:08.268704722Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
grafana | logger=migrator t=2024-05-01T08:51:08.276450147Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.746535ms
grafana | logger=migrator t=2024-05-01T08:51:08.28799141Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
grafana | logger=migrator t=2024-05-01T08:51:08.294480816Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=6.493337ms
grafana | logger=migrator t=2024-05-01T08:51:08.305495009Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
grafana | logger=migrator t=2024-05-01T08:51:08.312041089Z level=info msg="Migration successfully executed" id="Add OAuth token
type to user_auth" duration=6.5461ms
grafana | logger=migrator t=2024-05-01T08:51:08.320123322Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
grafana | logger=migrator t=2024-05-01T08:51:08.328517272Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=8.39779ms
grafana | logger=migrator t=2024-05-01T08:51:08.337952789Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
grafana | logger=migrator t=2024-05-01T08:51:08.341343635Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=3.380905ms
grafana | logger=migrator t=2024-05-01T08:51:08.381208491Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
grafana | logger=migrator t=2024-05-01T08:51:08.387903859Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.700998ms
grafana | logger=migrator t=2024-05-01T08:51:08.392078557Z level=info msg="Executing migration" id="create server_lock table"
grafana | logger=migrator t=2024-05-01T08:51:08.392754845Z level=info msg="Migration successfully executed" id="create server_lock table" duration=675.788µs
grafana | logger=migrator t=2024-05-01T08:51:08.397762189Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
grafana | logger=migrator t=2024-05-01T08:51:08.399361587Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.598929ms
grafana | logger=migrator t=2024-05-01T08:51:08.404288647Z level=info msg="Executing migration" id="create user auth token table"
grafana | logger=migrator t=2024-05-01T08:51:08.406489278Z level=info msg="Migration successfully executed" id="create user auth token table" duration=2.20064ms
grafana | logger=migrator t=2024-05-01T08:51:08.414186689Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
grafana | logger=migrator
t=2024-05-01T08:51:08.41619458Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=2.010741ms
grafana | logger=migrator t=2024-05-01T08:51:08.420694757Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
grafana | logger=migrator t=2024-05-01T08:51:08.422477654Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.781787ms
grafana | logger=migrator t=2024-05-01T08:51:08.428072291Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
grafana | logger=migrator t=2024-05-01T08:51:08.429237735Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.165474ms
grafana | logger=migrator t=2024-05-01T08:51:08.432554687Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
grafana | logger=migrator t=2024-05-01T08:51:08.44265548Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=10.094823ms
grafana | logger=migrator t=2024-05-01T08:51:08.446922965Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
grafana | logger=migrator t=2024-05-01T08:51:08.447638924Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=713.478µs
grafana | logger=migrator t=2024-05-01T08:51:08.450752554Z level=info msg="Executing migration" id="create cache_data table"
grafana | logger=migrator t=2024-05-01T08:51:08.451664105Z level=info msg="Migration successfully executed" id="create cache_data table" duration=909.861µs
grafana | logger=migrator t=2024-05-01T08:51:08.455023999Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
grafana | logger=migrator t=2024-05-01T08:51:08.456598575Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.575146ms
grafana
| logger=migrator t=2024-05-01T08:51:08.460293928Z level=info msg="Executing migration" id="create short_url table v1"
grafana | logger=migrator t=2024-05-01T08:51:08.461159765Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=865.787µs
grafana | logger=migrator t=2024-05-01T08:51:08.465233658Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
grafana | logger=migrator t=2024-05-01T08:51:08.466253115Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.019167ms
grafana | logger=migrator t=2024-05-01T08:51:08.469689133Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
grafana | logger=migrator t=2024-05-01T08:51:08.469789548Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=101.225µs
grafana | logger=migrator t=2024-05-01T08:51:08.473261119Z level=info msg="Executing migration" id="delete alert_definition table"
grafana | logger=migrator t=2024-05-01T08:51:08.473387545Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=126.607µs
grafana | logger=migrator t=2024-05-01T08:51:08.479075007Z level=info msg="Executing migration" id="recreate alert_definition table"
grafana | logger=migrator t=2024-05-01T08:51:08.480460154Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.385396ms
grafana | logger=migrator t=2024-05-01T08:51:08.484712676Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
grafana | logger=migrator t=2024-05-01T08:51:08.48641649Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.702974ms
grafana | logger=migrator t=2024-05-01T08:51:08.490882385Z level=info msg="Executing migration" id="add index in alert_definition
on org_id and uid columns"
grafana | logger=migrator t=2024-05-01T08:51:08.492087571Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.204916ms
grafana | logger=migrator t=2024-05-01T08:51:08.530658726Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
grafana | logger=migrator t=2024-05-01T08:51:08.530801234Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=146.328µs
grafana | logger=migrator t=2024-05-01T08:51:08.537931004Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
grafana | logger=migrator t=2024-05-01T08:51:08.539783816Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.857542ms
grafana | logger=migrator t=2024-05-01T08:51:08.543582484Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
grafana | logger=migrator t=2024-05-01T08:51:08.544574579Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=992.035µs
grafana | logger=migrator t=2024-05-01T08:51:08.552450461Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
grafana | logger=migrator t=2024-05-01T08:51:08.554148754Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.697843ms
kafka | kafka.metrics.reporters = []
kafka | leader.imbalance.check.interval.seconds = 300
kafka | leader.imbalance.per.broker.percentage = 10
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
kafka | log.cleaner.backoff.ms = 15000
kafka |
log.cleaner.dedupe.buffer.size = 134217728
kafka | log.cleaner.delete.retention.ms = 86400000
kafka | log.cleaner.enable = true
kafka | log.cleaner.io.buffer.load.factor = 0.9
kafka | log.cleaner.io.buffer.size = 524288
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
kafka | log.cleaner.min.cleanable.ratio = 0.5
kafka | log.cleaner.min.compaction.lag.ms = 0
kafka | log.cleaner.threads = 1
kafka | log.cleanup.policy = [delete]
kafka | log.dir = /tmp/kafka-logs
kafka | log.dirs = /var/lib/kafka/data
kafka | log.flush.interval.messages = 9223372036854775807
kafka | log.flush.interval.ms = null
kafka | log.flush.offset.checkpoint.interval.ms = 60000
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
kafka | log.index.interval.bytes = 4096
kafka | log.index.size.max.bytes = 10485760
kafka | log.local.retention.bytes = -2
kafka | log.local.retention.ms = -2
kafka | log.message.downconversion.enable = true
kafka | log.message.format.version = 3.0-IV1
kafka | log.message.timestamp.after.max.ms = 9223372036854775807
kafka | log.message.timestamp.before.max.ms = 9223372036854775807
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-apex-pdp | security.providers = null
policy-apex-pdp |
send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-05-01T08:51:41.133+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-05-01T08:51:41.133+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-05-01T08:51:41.133+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714553501133 policy-apex-pdp | [2024-05-01T08:51:41.134+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2, groupId=be7c28cf-ee32-4168-825d-edc2db369b35] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-05-01T08:51:41.134+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink 
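The consumer settings dumped above (session timeout, security protocol, the StringDeserializer, and the subscription to policy-pdp-pap under group be7c28cf-ee32-4168-825d-edc2db369b35) can be collected in one place. The sketch below is purely illustrative: it gathers the values from the log into a dict keyed by the Kafka property names and summarises the subscription; it does not open a real consumer, since that would need the live kafka:9092 broker from this compose environment.

```python
# Illustrative reconstruction of the consumer settings printed in the
# policy-apex-pdp log above. Keys are the Kafka property names verbatim;
# no connection to the broker is attempted here.

consumer_config = {
    "group.id": "be7c28cf-ee32-4168-825d-edc2db369b35",
    "security.protocol": "PLAINTEXT",
    "session.timeout.ms": 45000,
    "ssl.enabled.protocols": ["TLSv1.2", "TLSv1.3"],
    "value.deserializer": "org.apache.kafka.common.serialization.StringDeserializer",
}

TOPIC = "policy-pdp-pap"  # topic the consumer subscribes to, per the log


def describe(config: dict, topic: str) -> str:
    """Summarise the subscription the log reports."""
    return (
        f"group {config['group.id']} -> topic {topic} "
        f"over {config['security.protocol']}"
    )


if __name__ == "__main__":
    print(describe(consumer_config, TOPIC))
```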
[partitionId=7476620c-2335-4bd4-a15a-26d70a5546c2, alive=false, publisher=null]]: starting policy-apex-pdp | [2024-05-01T08:51:41.145+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | buffer.memory = 33554432 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | 
num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 grafana | logger=migrator t=2024-05-01T08:51:08.558274911Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-05-01T08:51:08.559778443Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.503731ms grafana | logger=migrator t=2024-05-01T08:51:08.564022226Z level=info msg="Executing migration" id="Add 
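Many of the broker values printed above are raw millisecond or minute counts. As a small illustrative helper (not part of the build), the snippet below renders a few of them as human-readable durations; the numeric values are copied verbatim from the kafka dump.

```python
# Convert a few Kafka broker settings from the log into readable durations.
# The helper is illustrative only; values come straight from the dump.

def human_duration(ms: int) -> str:
    """Render a millisecond count as d/h/m/s parts, largest first."""
    seconds, ms = divmod(ms, 1000)
    minutes, seconds = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    days, hours = divmod(hours, 24)
    parts = [(days, "d"), (hours, "h"), (minutes, "m"), (seconds, "s")]
    return " ".join(f"{v}{u}" for v, u in parts if v) or "0s"


broker_ms_settings = {
    "log.cleaner.delete.retention.ms": 86_400_000,   # from the kafka dump
    "offsets.retention.minutes": 10_080 * 60_000,    # 10080 min, as ms
    "log.retention.check.interval.ms": 300_000,
}

if __name__ == "__main__":
    for key, ms in broker_ms_settings.items():
        print(key, "=", human_duration(ms))
```

So the compacted-delete retention is one day, committed offsets are kept for a week, and retention is re-checked every five minutes.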
column paused in alert_definition" grafana | logger=migrator t=2024-05-01T08:51:08.569861486Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.83875ms grafana | logger=migrator t=2024-05-01T08:51:08.575558268Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2024-05-01T08:51:08.576562943Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.004125ms grafana | logger=migrator t=2024-05-01T08:51:08.581419359Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2024-05-01T08:51:08.581512525Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=94.085µs grafana | logger=migrator t=2024-05-01T08:51:08.588442864Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2024-05-01T08:51:08.589960268Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.516514ms grafana | logger=migrator t=2024-05-01T08:51:08.595259558Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2024-05-01T08:51:08.59711067Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.849522ms grafana | logger=migrator t=2024-05-01T08:51:08.601985807Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2024-05-01T08:51:08.60313325Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.141503ms grafana | logger=migrator t=2024-05-01T08:51:08.607644297Z 
level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2024-05-01T08:51:08.607810856Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=166.539µs grafana | logger=migrator t=2024-05-01T08:51:08.612313243Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2024-05-01T08:51:08.614036758Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.721685ms grafana | logger=migrator t=2024-05-01T08:51:08.621484136Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2024-05-01T08:51:08.622496972Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.012186ms grafana | logger=migrator t=2024-05-01T08:51:08.627147037Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2024-05-01T08:51:08.628422717Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.271211ms grafana | logger=migrator t=2024-05-01T08:51:08.632242636Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2024-05-01T08:51:08.633330475Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.089899ms grafana | logger=migrator t=2024-05-01T08:51:08.637808971Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2024-05-01T08:51:08.643681883Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" 
duration=5.871032ms grafana | logger=migrator t=2024-05-01T08:51:08.648880518Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2024-05-01T08:51:08.649909245Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.028246ms grafana | logger=migrator t=2024-05-01T08:51:08.654070003Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2024-05-01T08:51:08.655065337Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=994.714µs grafana | logger=migrator t=2024-05-01T08:51:08.66131933Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2024-05-01T08:51:08.68847964Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.157689ms grafana | logger=migrator t=2024-05-01T08:51:08.692193483Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2024-05-01T08:51:08.717020104Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=24.826311ms grafana | logger=migrator t=2024-05-01T08:51:08.720280374Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2024-05-01T08:51:08.7213083Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.025466ms grafana | logger=migrator t=2024-05-01T08:51:08.725606695Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2024-05-01T08:51:08.72659224Z level=info msg="Migration 
successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=985.005µs grafana | logger=migrator t=2024-05-01T08:51:08.731283806Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2024-05-01T08:51:08.736979929Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.695733ms grafana | logger=migrator t=2024-05-01T08:51:08.740992289Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2024-05-01T08:51:08.747539198Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=6.548929ms grafana | logger=migrator t=2024-05-01T08:51:08.752073917Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2024-05-01T08:51:08.753109103Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.034946ms grafana | logger=migrator t=2024-05-01T08:51:08.756224514Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2024-05-01T08:51:08.757347326Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.121732ms grafana | logger=migrator t=2024-05-01T08:51:08.760489298Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator t=2024-05-01T08:51:08.761519075Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.028746ms grafana | logger=migrator t=2024-05-01T08:51:08.765757267Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2024-05-01T08:51:08.766831726Z level=info msg="Migration successfully executed" 
id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.073909ms grafana | logger=migrator t=2024-05-01T08:51:08.770109856Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2024-05-01T08:51:08.770176909Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=67.494µs grafana | logger=migrator t=2024-05-01T08:51:08.773551945Z level=info msg="Executing migration" id="add column for to alert_rule" grafana | logger=migrator t=2024-05-01T08:51:08.784139265Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=10.583329ms grafana | logger=migrator t=2024-05-01T08:51:08.788567078Z level=info msg="Executing migration" id="add column annotations to alert_rule" grafana | logger=migrator t=2024-05-01T08:51:08.794472112Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.899933ms grafana | logger=migrator t=2024-05-01T08:51:08.799218822Z level=info msg="Executing migration" id="add column labels to alert_rule" grafana | logger=migrator t=2024-05-01T08:51:08.807624493Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=8.404441ms grafana | logger=migrator t=2024-05-01T08:51:08.811915428Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" grafana | logger=migrator t=2024-05-01T08:51:08.812673789Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=759.201µs grafana | logger=migrator t=2024-05-01T08:51:08.818495069Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" grafana | logger=migrator t=2024-05-01T08:51:08.819707555Z level=info msg="Migration successfully executed" id="add index in alert_rule 
on org_id, namespase_uid and title columns" duration=1.215856ms grafana | logger=migrator t=2024-05-01T08:51:08.823751697Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" grafana | logger=migrator t=2024-05-01T08:51:08.830017991Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.265884ms grafana | logger=migrator t=2024-05-01T08:51:08.833251008Z level=info msg="Executing migration" id="add panel_id column to alert_rule" policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.type = none policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | delivery.timeout.ms = 120000 policy-apex-pdp | enable.idempotence = true policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-apex-pdp | max.request.size = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-apex-pdp | partitioner.class = null policy-apex-pdp | partitioner.ignore.keys = false policy-apex-pdp | receive.buffer.bytes = 32768 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retries = 2147483647 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | 
sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.2:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.7:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . 
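The ProducerConfig dump above (acks = -1, enable.idempotence = true, retries = 2147483647, linger.ms = 0) describes an idempotent producer, which is consistent with the "Instantiated an idempotent producer" line later in the log. The sketch below only gathers those logged values into a dict and checks the idempotence preconditions; it is illustrative and makes no broker connection.

```python
# Collect the ProducerConfig values printed in the policy-apex-pdp log,
# keyed by the Kafka property names. Illustrative only; nothing here
# connects to the kafka:9092 broker.

producer_config = {
    "bootstrap.servers": ["kafka:9092"],
    "acks": -1,                      # i.e. acks=all
    "enable.idempotence": True,
    "retries": 2_147_483_647,        # effectively unlimited retries
    "linger.ms": 0,
    "batch.size": 16_384,
    "delivery.timeout.ms": 120_000,
}


def is_idempotent_safe(cfg: dict) -> bool:
    """Idempotence requires acks=all (-1) and a positive retry budget."""
    return cfg["enable.idempotence"] and cfg["acks"] == -1 and cfg["retries"] > 0


if __name__ == "__main__":
    print("idempotent-safe:", is_idempotent_safe(producer_config))
```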
____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.10) policy-api | policy-api | [2024-05-01T08:51:17.235+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-api | [2024-05-01T08:51:17.303+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 17 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-05-01T08:51:17.304+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-05-01T08:51:19.270+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-05-01T08:51:19.355+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 75 ms. Found 6 JPA repository interfaces. policy-api | [2024-05-01T08:51:19.785+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-05-01T08:51:19.786+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-05-01T08:51:20.443+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-05-01T08:51:20.458+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-05-01T08:51:20.460+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-05-01T08:51:20.460+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-api | [2024-05-01T08:51:20.572+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-05-01T08:51:20.572+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3195 ms policy-api | [2024-05-01T08:51:21.032+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-05-01T08:51:21.108+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final policy-api | [2024-05-01T08:51:21.162+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-05-01T08:51:21.497+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer prometheus | ts=2024-05-01T08:51:03.417Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d prometheus | ts=2024-05-01T08:51:03.417Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)" prometheus | ts=2024-05-01T08:51:03.418Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)" prometheus | ts=2024-05-01T08:51:03.418Z caller=main.go:623 
level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" prometheus | ts=2024-05-01T08:51:03.418Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" prometheus | ts=2024-05-01T08:51:03.418Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" prometheus | ts=2024-05-01T08:51:03.423Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 prometheus | ts=2024-05-01T08:51:03.424Z caller=main.go:1129 level=info msg="Starting TSDB ..." prometheus | ts=2024-05-01T08:51:03.425Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 prometheus | ts=2024-05-01T08:51:03.425Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 prometheus | ts=2024-05-01T08:51:03.428Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" prometheus | ts=2024-05-01T08:51:03.428Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.41µs prometheus | ts=2024-05-01T08:51:03.428Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" prometheus | ts=2024-05-01T08:51:03.429Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 prometheus | ts=2024-05-01T08:51:03.429Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=22.681µs wal_replay_duration=625.883µs wbl_replay_duration=210ns total_replay_duration=674.445µs prometheus | ts=2024-05-01T08:51:03.434Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC prometheus | ts=2024-05-01T08:51:03.434Z caller=main.go:1153 level=info msg="TSDB started" prometheus | ts=2024-05-01T08:51:03.434Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 
prometheus | ts=2024-05-01T08:51:03.435Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.043125ms db_storage=1.25µs remote_storage=2.23µs web_handler=710ns query_engine=940ns scrape=257.083µs scrape_sd=118.386µs notify=35.792µs notify_sd=12.31µs rules=1.85µs tracing=6.711µs prometheus | ts=2024-05-01T08:51:03.435Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." prometheus | ts=2024-05-01T08:51:03.435Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, 
parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | 
replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | server.max.startup.time.ms = 9223372036854775807 kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | 
socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.partition.verification.enable = true kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 
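The broker dumps its effective configuration above as plain `key = value` lines. A minimal sketch of pulling such lines into a dict for inspection, using three settings copied from this log (the parser itself is an illustration, not part of any ONAP or Kafka tooling):

```python
# Illustrative parser for "key = value" broker-config lines like those
# printed in the log above. Values are kept as strings, as logged.
def parse_kafka_config(lines):
    cfg = {}
    for line in lines:
        key, sep, value = line.partition(" = ")
        if sep:  # skip lines that are not key/value pairs
            cfg[key.strip()] = value.strip()
    return cfg

# Sample lines taken verbatim from the broker config dump in this log.
sample = [
    "socket.listen.backlog.size = 50",
    "ssl.protocol = TLSv1.3",
    "transaction.max.timeout.ms = 900000",
]
cfg = parse_kafka_config(sample)
```

This kind of flattening is handy when diffing the effective config of two broker runs.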
policy-db-migrator | -------------- policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | transaction.timeout.ms = 60000 policy-apex-pdp | transactional.id = null policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | policy-apex-pdp | [2024-05-01T08:51:41.167+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
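The producer instantiated above serializes its messages with `StringSerializer` and publishes PDP_STATUS heartbeats to `policy-pdp-pap`, as seen later in this log. A hedged sketch of constructing such a payload, with field names and sample values taken from the logged heartbeat JSON (this is not the ONAP message class, just an illustration of its wire shape):

```python
import json
import uuid

# Build a PDP_STATUS heartbeat with the same fields the apex-pdp
# heartbeats in this log carry. Field names are copied from the log.
def make_heartbeat(name, group, timestamp_ms):
    return {
        "pdpType": "apex",
        "state": "PASSIVE",
        "healthy": "HEALTHY",
        "description": "Pdp Heartbeat",
        "messageName": "PDP_STATUS",
        "requestId": str(uuid.uuid4()),  # each heartbeat gets a fresh id
        "timestampMs": timestamp_ms,
        "name": name,
        "pdpGroup": group,
    }

msg = make_heartbeat(
    "apex-d62bfb61-d94e-474e-a74e-302109ffaa0a", "defaultGroup", 1714553501189
)
payload = json.dumps(msg)  # the string a StringSerializer would publish
```

The same `requestId` later reappears as `responseTo` in the PDP's status responses, which is how PAP correlates a PDP_UPDATE with its acknowledgement.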
policy-apex-pdp | [2024-05-01T08:51:41.185+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-05-01T08:51:41.185+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-apex-pdp | [2024-05-01T08:51:41.185+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714553501185 policy-apex-pdp | [2024-05-01T08:51:41.185+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=7476620c-2335-4bd4-a15a-26d70a5546c2, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-apex-pdp | [2024-05-01T08:51:41.185+00:00|INFO|ServiceManager|main] service manager starting set alive policy-apex-pdp | [2024-05-01T08:51:41.185+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object policy-apex-pdp | [2024-05-01T08:51:41.187+00:00|INFO|ServiceManager|main] service manager starting topic sinks policy-apex-pdp | [2024-05-01T08:51:41.187+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher policy-apex-pdp | [2024-05-01T08:51:41.189+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener policy-apex-pdp | [2024-05-01T08:51:41.189+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher policy-apex-pdp | [2024-05-01T08:51:41.189+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher policy-apex-pdp | [2024-05-01T08:51:41.189+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=be7c28cf-ee32-4168-825d-edc2db369b35, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, 
allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a policy-apex-pdp | [2024-05-01T08:51:41.189+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=be7c28cf-ee32-4168-825d-edc2db369b35, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted policy-apex-pdp | [2024-05-01T08:51:41.189+00:00|INFO|ServiceManager|main] service manager starting Create REST server policy-apex-pdp | [2024-05-01T08:51:41.201+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: policy-apex-pdp | [] policy-apex-pdp | [2024-05-01T08:51:41.202+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"321d3b93-aad1-48d7-a1fa-445de02635e9","timestampMs":1714553501189,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup"} policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, 
parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName 
VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 
0570-toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0600-toscanodetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 
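Every TOSCA concept table the migrator creates above is keyed on the pair `(name, version)`, so multiple versions of the same concept coexist in one table. A minimal SQLite sketch of that versioning pattern (simplified DDL for illustration, not the MariaDB schema the migrator actually applies):

```python
import sqlite3

# Simplified stand-in for the toscadatatype table: the composite
# primary key (name, version) lets two versions of one concept coexist.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE toscadatatype (
        description TEXT,
        name VARCHAR(120) NOT NULL,
        version VARCHAR(20) NOT NULL,
        PRIMARY KEY (name, version)
    )
""")
conn.execute("INSERT INTO toscadatatype VALUES ('first cut', 'my.datatype', '1.0.0')")
conn.execute("INSERT INTO toscadatatype VALUES ('revised',   'my.datatype', '2.0.0')")
rows = conn.execute(
    "SELECT version FROM toscadatatype WHERE name = 'my.datatype' ORDER BY version"
).fetchall()
```

Inserting the same `(name, version)` twice would violate the primary key, which is exactly the uniqueness the real schema relies on.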
policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-apex-pdp | [2024-05-01T08:51:41.346+00:00|INFO|ServiceManager|main] service manager starting Rest Server policy-apex-pdp | [2024-05-01T08:51:41.346+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-05-01T08:51:41.346+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters policy-apex-pdp | [2024-05-01T08:51:41.346+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-05-01T08:51:41.355+00:00|INFO|ServiceManager|main] service manager started policy-apex-pdp | [2024-05-01T08:51:41.356+00:00|INFO|ServiceManager|main] service manager started 
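The Jetty server configured above exposes a Prometheus `MetricsServlet` on `/metrics`, and the resulting scrapes show up later in this log as access-log lines. An assumed helper for picking those lines apart, tested against a line copied verbatim from this log (the regex targets the common-log-style format seen here; it is an illustration, not ONAP code):

```python
import re

# Extract method, path, status and byte count from a Jetty access-log
# line of the shape produced by the /metrics scrapes in this log.
ACCESS_RE = re.compile(
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

line = (
    '172.17.0.4 - policyadmin [01/May/2024:08:51:56 +0000] '
    '"GET /metrics HTTP/1.1" 200 10639 "-" "Prometheus/2.51.2"'
)
m = ACCESS_RE.search(line)
hit = {k: m.group(k) for k in ("method", "path", "status", "bytes")}
```

Filtering a CSIT log this way quickly confirms the Prometheus scrapes are returning 200s.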
policy-apex-pdp | [2024-05-01T08:51:41.356+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. policy-apex-pdp | [2024-05-01T08:51:41.356+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-05-01T08:51:41.488+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2, groupId=be7c28cf-ee32-4168-825d-edc2db369b35] Cluster ID: sZdrrRZqSOecyf1-XTESVg policy-apex-pdp | [2024-05-01T08:51:41.488+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: sZdrrRZqSOecyf1-XTESVg policy-apex-pdp | 
[2024-05-01T08:51:41.489+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2, groupId=be7c28cf-ee32-4168-825d-edc2db369b35] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-apex-pdp | [2024-05-01T08:51:41.495+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2, groupId=be7c28cf-ee32-4168-825d-edc2db369b35] (Re-)joining group policy-apex-pdp | [2024-05-01T08:51:41.498+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 policy-apex-pdp | [2024-05-01T08:51:41.511+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2, groupId=be7c28cf-ee32-4168-825d-edc2db369b35] Request joining group due to: need to re-join with the given member-id: consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2-5a287fc4-d6ce-4d9e-9f70-f65617628610 policy-apex-pdp | [2024-05-01T08:51:41.511+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2, groupId=be7c28cf-ee32-4168-825d-edc2db369b35] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-apex-pdp | [2024-05-01T08:51:41.511+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2, groupId=be7c28cf-ee32-4168-825d-edc2db369b35] (Re-)joining group policy-apex-pdp | [2024-05-01T08:51:41.943+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls policy-apex-pdp | [2024-05-01T08:51:41.943+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls policy-apex-pdp | [2024-05-01T08:51:44.517+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2, groupId=be7c28cf-ee32-4168-825d-edc2db369b35] Successfully joined group with generation Generation{generationId=1, memberId='consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2-5a287fc4-d6ce-4d9e-9f70-f65617628610', protocol='range'} policy-apex-pdp | [2024-05-01T08:51:44.526+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2, groupId=be7c28cf-ee32-4168-825d-edc2db369b35] Finished assignment for group at generation 1: {consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2-5a287fc4-d6ce-4d9e-9f70-f65617628610=Assignment(partitions=[policy-pdp-pap-0])} policy-apex-pdp | [2024-05-01T08:51:44.542+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2, groupId=be7c28cf-ee32-4168-825d-edc2db369b35] Successfully synced group in generation Generation{generationId=1, memberId='consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2-5a287fc4-d6ce-4d9e-9f70-f65617628610', protocol='range'} policy-apex-pdp | [2024-05-01T08:51:44.542+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2, groupId=be7c28cf-ee32-4168-825d-edc2db369b35] Notifying assignor about the new 
Assignment(partitions=[policy-pdp-pap-0]) policy-apex-pdp | [2024-05-01T08:51:44.544+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2, groupId=be7c28cf-ee32-4168-825d-edc2db369b35] Adding newly assigned partitions: policy-pdp-pap-0 policy-apex-pdp | [2024-05-01T08:51:44.552+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2, groupId=be7c28cf-ee32-4168-825d-edc2db369b35] Found no committed offset for partition policy-pdp-pap-0 policy-apex-pdp | [2024-05-01T08:51:44.567+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2, groupId=be7c28cf-ee32-4168-825d-edc2db369b35] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-apex-pdp | [2024-05-01T08:51:56.154+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.4 - policyadmin [01/May/2024:08:51:56 +0000] "GET /metrics HTTP/1.1" 200 10639 "-" "Prometheus/2.51.2" policy-apex-pdp | [2024-05-01T08:52:01.190+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"1e6bf0b7-d9d0-4154-a5cb-b1f23bc870f0","timestampMs":1714553521189,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-05-01T08:52:01.208+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"1e6bf0b7-d9d0-4154-a5cb-b1f23bc870f0","timestampMs":1714553521189,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup"} policy-apex-pdp 
| [2024-05-01T08:52:01.211+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-05-01T08:52:01.347+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-b080b72b-cd54-41ec-8ce5-cde21d44cf94","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"66651ea1-ddd7-4463-a227-44f2c500a1c1","timestampMs":1714553521291,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-01T08:52:01.362+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-apex-pdp | [2024-05-01T08:52:01.363+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"66651ea1-ddd7-4463-a227-44f2c500a1c1","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"150c59f6-5f2c-4f25-93e4-5fc7ab96dd2c","timestampMs":1714553521363,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-01T08:52:01.363+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5bb21b1b-08c0-4d8a-9edd-eda3f6d9598b","timestampMs":1714553521362,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-05-01T08:52:01.377+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"66651ea1-ddd7-4463-a227-44f2c500a1c1","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"150c59f6-5f2c-4f25-93e4-5fc7ab96dd2c","timestampMs":1714553521363,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-01T08:52:01.377+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-05-01T08:52:01.377+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5bb21b1b-08c0-4d8a-9edd-eda3f6d9598b","timestampMs":1714553521362,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup"} policy-apex-pdp | [2024-05-01T08:52:01.377+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-05-01T08:52:01.393+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-b080b72b-cd54-41ec-8ce5-cde21d44cf94","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f00943fb-d546-4b19-b00e-701bffd55885","timestampMs":1714553521292,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-01T08:52:01.397+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f00943fb-d546-4b19-b00e-701bffd55885","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"5bd08c8b-c07d-4d31-a700-f618654e9884","timestampMs":1714553521397,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-01T08:52:01.407+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f00943fb-d546-4b19-b00e-701bffd55885","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"5bd08c8b-c07d-4d31-a700-f618654e9884","timestampMs":1714553521397,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-01T08:52:01.409+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-apex-pdp | [2024-05-01T08:52:01.440+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-apex-pdp | {"source":"pap-b080b72b-cd54-41ec-8ce5-cde21d44cf94","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63129a6a-b8cb-4f5a-a61a-dec570283fbe","timestampMs":1714553521407,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-apex-pdp | [2024-05-01T08:52:01.445+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63129a6a-b8cb-4f5a-a61a-dec570283fbe","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"13ab49de-b242-4612-9806-a95d6c5d01aa","timestampMs":1714553521445,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-05-01T08:52:01.452+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63129a6a-b8cb-4f5a-a61a-dec570283fbe","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"13ab49de-b242-4612-9806-a95d6c5d01aa","timestampMs":1714553521445,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-05-01T08:52:01.453+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2024-05-01T08:52:56.082+00:00|INFO|RequestLog|qtp739264372-26] 172.17.0.4 - policyadmin [01/May/2024:08:52:56 +0000] "GET /metrics HTTP/1.1" 200 10641 "-" "Prometheus/2.51.2"
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0630-toscanodetype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
grafana | logger=migrator t=2024-05-01T08:51:08.839235816Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.982278ms
grafana | logger=migrator t=2024-05-01T08:51:08.84331735Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
kafka | transaction.state.log.min.isr = 2
kafka | transaction.state.log.num.partitions = 50
kafka | transaction.state.log.replication.factor = 3
kafka | transaction.state.log.segment.bytes = 104857600
kafka | transactional.id.expiration.ms = 604800000
kafka | unclean.leader.election.enable = false
kafka | unstable.api.versions.enable = false
kafka | zookeeper.clientCnxnSocket = null
kafka | zookeeper.connect = zookeeper:2181
kafka | zookeeper.connection.timeout.ms = null
kafka | zookeeper.max.in.flight.requests = 10
kafka | zookeeper.metadata.migration.enable = false
kafka | zookeeper.metadata.migration.min.batch.size = 200
kafka | zookeeper.session.timeout.ms = 18000
kafka | zookeeper.set.acl = false
kafka | zookeeper.ssl.cipher.suites = null
kafka | zookeeper.ssl.client.enable = false
kafka | zookeeper.ssl.crl.enable = false
kafka | zookeeper.ssl.enabled.protocols = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | zookeeper.ssl.keystore.location = null
kafka | zookeeper.ssl.keystore.password = null
kafka | zookeeper.ssl.keystore.type = null
kafka | zookeeper.ssl.ocsp.enable = false
kafka | zookeeper.ssl.protocol = TLSv1.2
kafka | zookeeper.ssl.truststore.location = null
kafka | zookeeper.ssl.truststore.password = null
kafka | zookeeper.ssl.truststore.type = null
kafka | (kafka.server.KafkaConfig)
kafka | [2024-05-01 08:51:07,160] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-05-01 08:51:07,160] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-05-01 08:51:07,162] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-05-01 08:51:07,165] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-05-01 08:51:07,192] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
kafka | [2024-05-01 08:51:07,197] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
kafka | [2024-05-01 08:51:07,209] INFO Loaded 0 logs in 16ms (kafka.log.LogManager)
kafka | [2024-05-01 08:51:07,211] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka | [2024-05-01 08:51:07,212] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka | [2024-05-01 08:51:07,222] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2024-05-01 08:51:07,267] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
kafka | [2024-05-01 08:51:07,287] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2024-05-01 08:51:07,301] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka | [2024-05-01 08:51:07,341] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-05-01 08:51:07,709] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2024-05-01 08:51:07,731] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka | [2024-05-01 08:51:07,732] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2024-05-01 08:51:07,745] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
policy-db-migrator |
policy-db-migrator | > upgrade 0670-toscapolicies.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0690-toscapolicy.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0730-toscaproperty.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0770-toscarequirement.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-api | [2024-05-01T08:51:21.528+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2024-05-01T08:51:21.618+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@26844abb
policy-api | [2024-05-01T08:51:21.620+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-api | [2024-05-01T08:51:23.727+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2024-05-01T08:51:23.732+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2024-05-01T08:51:24.770+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2024-05-01T08:51:25.675+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2024-05-01T08:51:26.811+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2024-05-01T08:51:27.066+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@4fa650e1, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@54d8c998, org.springframework.security.web.context.SecurityContextHolderFilter@31f5829e, org.springframework.security.web.header.HeaderWriterFilter@2a384b46, org.springframework.security.web.authentication.logout.LogoutFilter@203f1447, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1c277413, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@72e6e93, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@32c29f7b, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@5da1f9b9, org.springframework.security.web.access.ExceptionTranslationFilter@4743220d, org.springframework.security.web.access.intercept.AuthorizationFilter@13a34a70] policy-api | 
[2024-05-01T08:51:27.940+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2024-05-01T08:51:28.059+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-api | [2024-05-01T08:51:28.084+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2024-05-01T08:51:28.100+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.686 seconds (process running for 12.334) policy-api | [2024-05-01T08:51:39.922+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2024-05-01T08:51:39.923+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' policy-api | [2024-05-01T08:51:39.925+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms policy-api | [2024-05-01T08:51:43.536+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: policy-api | [] grafana | logger=migrator t=2024-05-01T08:51:08.844353917Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.036268ms grafana | logger=migrator t=2024-05-01T08:51:08.849014262Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" grafana | logger=migrator t=2024-05-01T08:51:08.855834766Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.819904ms grafana | logger=migrator t=2024-05-01T08:51:08.859948292Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" grafana | logger=migrator t=2024-05-01T08:51:08.865965872Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.017331ms grafana | logger=migrator t=2024-05-01T08:51:08.869298855Z level=info msg="Executing migration" 
id="fix is_paused column for alert_rule table" grafana | logger=migrator t=2024-05-01T08:51:08.869440723Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=144.078µs grafana | logger=migrator t=2024-05-01T08:51:08.879015937Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2024-05-01T08:51:08.881195897Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=2.17832ms grafana | logger=migrator t=2024-05-01T08:51:08.888784953Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2024-05-01T08:51:08.890071064Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.2862ms grafana | logger=migrator t=2024-05-01T08:51:08.893545534Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2024-05-01T08:51:08.8947344Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.194546ms grafana | logger=migrator t=2024-05-01T08:51:08.899227755Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2024-05-01T08:51:08.899371213Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=142.988µs grafana | logger=migrator t=2024-05-01T08:51:08.902530447Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2024-05-01T08:51:08.90877751Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" 
duration=6.246462ms grafana | logger=migrator t=2024-05-01T08:51:08.913099626Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2024-05-01T08:51:08.91953761Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.437224ms grafana | logger=migrator t=2024-05-01T08:51:08.925945931Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2024-05-01T08:51:08.933064191Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=7.117729ms grafana | logger=migrator t=2024-05-01T08:51:08.936077407Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2024-05-01T08:51:08.942797785Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.719078ms grafana | logger=migrator t=2024-05-01T08:51:08.947227268Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2024-05-01T08:51:08.953561275Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.333467ms grafana | logger=migrator t=2024-05-01T08:51:08.960171967Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2024-05-01T08:51:08.960352837Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=184.86µs grafana | logger=migrator t=2024-05-01T08:51:08.963669059Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2024-05-01T08:51:08.964637133Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=962.833µs grafana | logger=migrator 
t=2024-05-01T08:51:08.972964709Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2024-05-01T08:51:08.983674137Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=10.711477ms grafana | logger=migrator t=2024-05-01T08:51:09.055365658Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2024-05-01T08:51:09.055548668Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=183.499µs grafana | logger=migrator t=2024-05-01T08:51:09.061988688Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2024-05-01T08:51:09.068130353Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.142216ms grafana | logger=migrator t=2024-05-01T08:51:09.072430899Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2024-05-01T08:51:09.073142097Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=711.267µs grafana | logger=migrator t=2024-05-01T08:51:09.076424851Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2024-05-01T08:51:09.084815004Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=8.394844ms grafana | logger=migrator t=2024-05-01T08:51:09.089133961Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2024-05-01T08:51:09.089780865Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table 
duration=648.494µs grafana | logger=migrator t=2024-05-01T08:51:09.094351837Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2024-05-01T08:51:09.095335429Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=983.881µs grafana | logger=migrator t=2024-05-01T08:51:09.100647839Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | logger=migrator t=2024-05-01T08:51:09.106949883Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.301394ms grafana | logger=migrator t=2024-05-01T08:51:09.113093577Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2024-05-01T08:51:09.113891459Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=796.812µs grafana | logger=migrator t=2024-05-01T08:51:09.119952939Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" policy-pap | Waiting for mariadb port 3306... policy-pap | mariadb (172.17.0.2:3306) open policy-pap | Waiting for kafka port 9092... policy-pap | kafka (172.17.0.6:9092) open policy-pap | Waiting for api port 6969... policy-pap | api (172.17.0.9:6969) open policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-pap | policy-pap | . 
____ _ __ _ _ policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / policy-pap | =========|_|==============|___/=/_/_/_/ policy-pap | :: Spring Boot :: (v3.1.10) policy-pap | policy-pap | [2024-05-01T08:51:30.818+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final policy-pap | [2024-05-01T08:51:30.876+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 31 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) policy-pap | [2024-05-01T08:51:30.877+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" policy-pap | [2024-05-01T08:51:32.745+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-pap | [2024-05-01T08:51:32.836+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 82 ms. Found 7 JPA repository interfaces. policy-pap | [2024-05-01T08:51:33.271+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-05-01T08:51:33.271+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler policy-pap | [2024-05-01T08:51:33.865+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-pap | [2024-05-01T08:51:33.873+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-pap | [2024-05-01T08:51:33.875+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-pap | [2024-05-01T08:51:33.876+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] policy-pap | [2024-05-01T08:51:33.969+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext policy-pap | [2024-05-01T08:51:33.970+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3027 ms policy-pap | [2024-05-01T08:51:34.364+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-pap | [2024-05-01T08:51:34.413+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final kafka | [2024-05-01 08:51:07,753] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) kafka | [2024-05-01 08:51:07,789] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-05-01 08:51:07,791] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-05-01 08:51:07,792] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-05-01 08:51:07,794] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | [2024-05-01 08:51:07,795] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) kafka | 
[2024-05-01 08:51:07,813] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2024-05-01 08:51:07,815] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
kafka | [2024-05-01 08:51:07,848] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2024-05-01 08:51:07,880] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1714553467864,1714553467864,1,0,0,72057610506076161,258,0,27 (kafka.zk.KafkaZkClient)
kafka | [2024-05-01 08:51:07,881] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2024-05-01 08:51:07,962] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2024-05-01 08:51:07,977] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-05-01 08:51:07,982] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-05-01 08:51:07,984] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-05-01 08:51:07,990] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2024-05-01 08:51:08,006] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2024-05-01 08:51:08,010] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-01 08:51:08,011] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2024-05-01 08:51:08,020] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2024-05-01 08:51:08,023] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-01 08:51:08,045] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2024-05-01 08:51:08,053] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2024-05-01 08:51:08,053] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2024-05-01 08:51:08,053] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2024-05-01 08:51:08,053] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2024-05-01 08:51:08,067] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2024-05-01 08:51:08,072] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2024-05-01 08:51:08,079] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2024-05-01 08:51:08,094] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2024-05-01 08:51:08,099] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-05-01 08:51:08,103] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2024-05-01 08:51:08,112] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2024-05-01 08:51:08,129] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2024-05-01 08:51:08,134] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2024-05-01 08:51:08,135] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2024-05-01 08:51:08,135] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2024-05-01 08:51:08,136] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2024-05-01 08:51:08,136] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2024-05-01 08:51:08,142] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
grafana |
logger=migrator t=2024-05-01T08:51:09.121772155Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.822187ms
grafana | logger=migrator t=2024-05-01T08:51:09.12546941Z level=info msg="Executing migration" id="create alert_image table"
grafana | logger=migrator t=2024-05-01T08:51:09.126348486Z level=info msg="Migration successfully executed" id="create alert_image table" duration=878.776µs
grafana | logger=migrator t=2024-05-01T08:51:09.13096809Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
grafana | logger=migrator t=2024-05-01T08:51:09.131980014Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.011573ms
grafana | logger=migrator t=2024-05-01T08:51:09.136955577Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
grafana | logger=migrator t=2024-05-01T08:51:09.13702415Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=69.404µs
grafana | logger=migrator t=2024-05-01T08:51:09.140573968Z level=info msg="Executing migration" id=create_alert_configuration_history_table
grafana | logger=migrator t=2024-05-01T08:51:09.141897408Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.32213ms
grafana | logger=migrator t=2024-05-01T08:51:09.14704141Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
grafana | logger=migrator t=2024-05-01T08:51:09.149021004Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.989005ms
grafana | logger=migrator t=2024-05-01T08:51:09.152310398Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2024-05-01T08:51:09.152696358Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2024-05-01T08:51:09.15595185Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
grafana | logger=migrator t=2024-05-01T08:51:09.1563404Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=387.76µs
grafana | logger=migrator t=2024-05-01T08:51:09.161146714Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
grafana | logger=migrator t=2024-05-01T08:51:09.162745958Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.599494ms
grafana | logger=migrator t=2024-05-01T08:51:09.166086975Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
grafana | logger=migrator t=2024-05-01T08:51:09.174995246Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.90587ms
grafana | logger=migrator t=2024-05-01T08:51:09.177955092Z level=info msg="Executing migration" id="create library_element table v1"
grafana | logger=migrator t=2024-05-01T08:51:09.17925157Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.295188ms
grafana | logger=migrator t=2024-05-01T08:51:09.183851713Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
grafana | logger=migrator t=2024-05-01T08:51:09.184905518Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.053545ms
grafana | logger=migrator t=2024-05-01T08:51:09.18795918Z level=info msg="Executing migration" id="create library_element_connection table v1"
grafana | logger=migrator t=2024-05-01T08:51:09.188783344Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=823.354µs
grafana | logger=migrator t=2024-05-01T08:51:09.191678427Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
grafana | logger=migrator t=2024-05-01T08:51:09.19269734Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.018073ms
grafana | logger=migrator t=2024-05-01T08:51:09.197962798Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
grafana | logger=migrator t=2024-05-01T08:51:09.199015944Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.052745ms
grafana | logger=migrator t=2024-05-01T08:51:09.2017843Z level=info msg="Executing migration" id="increase max description length to 2048"
grafana | logger=migrator t=2024-05-01T08:51:09.201815011Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=31.061µs
grafana | logger=migrator t=2024-05-01T08:51:09.20633132Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
grafana | logger=migrator t=2024-05-01T08:51:09.206397504Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=66.304µs
grafana | logger=migrator t=2024-05-01T08:51:09.20916941Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
grafana | logger=migrator t=2024-05-01T08:51:09.209469536Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=300.156µs
grafana | logger=migrator t=2024-05-01T08:51:09.214493711Z level=info msg="Executing migration" id="create data_keys table"
grafana | logger=migrator t=2024-05-01T08:51:09.215543476Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.050075ms
grafana | logger=migrator t=2024-05-01T08:51:09.2182528Z level=info msg="Executing migration" id="create secrets table"
grafana | logger=migrator t=2024-05-01T08:51:09.219095125Z level=info msg="Migration successfully executed" id="create secrets table" duration=839.995µs
grafana | logger=migrator t=2024-05-01T08:51:09.222133635Z level=info msg="Executing migration" id="rename data_keys name column to id"
grafana | logger=migrator t=2024-05-01T08:51:09.25215274Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=30.019174ms
grafana | logger=migrator t=2024-05-01T08:51:09.257043558Z level=info msg="Executing migration" id="add name column into data_keys"
grafana | logger=migrator t=2024-05-01T08:51:09.264196186Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.147458ms
grafana | logger=migrator t=2024-05-01T08:51:09.2671182Z level=info msg="Executing migration" id="copy data_keys id column values into name"
grafana | logger=migrator t=2024-05-01T08:51:09.267266098Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=147.938µs
grafana | logger=migrator t=2024-05-01T08:51:09.271213956Z level=info msg="Executing migration" id="rename data_keys name column to label"
grafana | logger=migrator t=2024-05-01T08:51:09.303177285Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=31.963209ms
grafana | logger=migrator t=2024-05-01T08:51:09.306860349Z level=info msg="Executing migration" id="rename data_keys id column back to name"
grafana | logger=migrator t=2024-05-01T08:51:09.339885243Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=33.022904ms
grafana | logger=migrator t=2024-05-01T08:51:09.344370839Z level=info msg="Executing migration" id="create kv_store table v1"
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | --------------
grafana | logger=migrator t=2024-05-01T08:51:09.345091108Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=722.579µs
grafana | logger=migrator t=2024-05-01T08:51:09.347926878Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
grafana | logger=migrator t=2024-05-01T08:51:09.349049987Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.122399ms
grafana | logger=migrator t=2024-05-01T08:51:09.353210166Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
grafana | logger=migrator t=2024-05-01T08:51:09.353439259Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=229.123µs
grafana | logger=migrator t=2024-05-01T08:51:09.36124007Z level=info msg="Executing migration" id="create permission table"
grafana | logger=migrator t=2024-05-01T08:51:09.363295809Z level=info msg="Migration
successfully executed" id="create permission table" duration=2.062449ms
grafana | logger=migrator t=2024-05-01T08:51:09.369356219Z level=info msg="Executing migration" id="add unique index permission.role_id"
grafana | logger=migrator t=2024-05-01T08:51:09.37127101Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.908841ms
grafana | logger=migrator t=2024-05-01T08:51:09.376159869Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
grafana | logger=migrator t=2024-05-01T08:51:09.378027467Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.867818ms
grafana | logger=migrator t=2024-05-01T08:51:09.381519632Z level=info msg="Executing migration" id="create role table"
grafana | logger=migrator t=2024-05-01T08:51:09.382455411Z level=info msg="Migration successfully executed" id="create role table" duration=935.519µs
grafana | logger=migrator t=2024-05-01T08:51:09.386220519Z level=info msg="Executing migration" id="add column display_name"
grafana | logger=migrator t=2024-05-01T08:51:09.395611516Z level=info msg="Migration successfully executed" id="add column display_name" duration=9.390887ms
grafana | logger=migrator t=2024-05-01T08:51:09.399770195Z level=info msg="Executing migration" id="add column group_name"
grafana | logger=migrator t=2024-05-01T08:51:09.405958942Z level=info msg="Migration successfully executed" id="add column group_name" duration=6.191486ms
grafana | logger=migrator t=2024-05-01T08:51:09.409529411Z level=info msg="Executing migration" id="add index role.org_id"
grafana | logger=migrator t=2024-05-01T08:51:09.411398769Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.873379ms
grafana | logger=migrator t=2024-05-01T08:51:09.41690825Z level=info msg="Executing migration" id="add unique index role_org_id_name"
grafana | logger=migrator t=2024-05-01T08:51:09.417761755Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=853.775µs
grafana | logger=migrator t=2024-05-01T08:51:09.420990156Z level=info msg="Executing migration" id="add index role_org_id_uid"
grafana | logger=migrator t=2024-05-01T08:51:09.42259302Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.601184ms
grafana | logger=migrator t=2024-05-01T08:51:09.427603435Z level=info msg="Executing migration" id="create team role table"
grafana | logger=migrator t=2024-05-01T08:51:09.429053042Z level=info msg="Migration successfully executed" id="create team role table" duration=1.448767ms
grafana | logger=migrator t=2024-05-01T08:51:09.43318673Z level=info msg="Executing migration" id="add index team_role.org_id"
grafana | logger=migrator t=2024-05-01T08:51:09.434301998Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.115478ms
grafana | logger=migrator t=2024-05-01T08:51:09.438001714Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
grafana | logger=migrator t=2024-05-01T08:51:09.439241259Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.238965ms
grafana | logger=migrator t=2024-05-01T08:51:09.444496337Z level=info msg="Executing migration" id="add index team_role.team_id"
grafana | logger=migrator t=2024-05-01T08:51:09.445600765Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.100837ms
grafana | logger=migrator t=2024-05-01T08:51:09.449459019Z level=info msg="Executing migration" id="create user role table"
grafana | logger=migrator t=2024-05-01T08:51:09.451047792Z level=info msg="Migration successfully executed" id="create user role table" duration=1.588353ms
grafana | logger=migrator t=2024-05-01T08:51:09.457543366Z level=info msg="Executing migration" id="add index user_role.org_id"
grafana | logger=migrator t=2024-05-01T08:51:09.458721038Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.177762ms
grafana | logger=migrator t=2024-05-01T08:51:09.46349322Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
grafana | logger=migrator t=2024-05-01T08:51:09.464665362Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.171532ms
grafana | logger=migrator t=2024-05-01T08:51:09.46954413Z level=info msg="Executing migration" id="add index user_role.user_id"
grafana | logger=migrator t=2024-05-01T08:51:09.471110763Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.565984ms
grafana | logger=migrator t=2024-05-01T08:51:09.475251741Z level=info msg="Executing migration" id="create builtin role table"
grafana | logger=migrator t=2024-05-01T08:51:09.476760931Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.50805ms
grafana | logger=migrator t=2024-05-01T08:51:09.480697839Z level=info msg="Executing migration" id="add index builtin_role.role_id"
grafana | logger=migrator t=2024-05-01T08:51:09.482021648Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.32282ms
grafana | logger=migrator t=2024-05-01T08:51:09.486701375Z level=info msg="Executing migration" id="add index builtin_role.name"
grafana | logger=migrator t=2024-05-01T08:51:09.488408866Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.706061ms
grafana | logger=migrator t=2024-05-01T08:51:09.492388156Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
grafana | logger=migrator t=2024-05-01T08:51:09.502191174Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.803388ms
grafana | logger=migrator t=2024-05-01T08:51:09.505878268Z level=info msg="Executing migration" id="add index builtin_role.org_id"
grafana | logger=migrator t=2024-05-01T08:51:09.50705314Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.174702ms
grafana | logger=migrator t=2024-05-01T08:51:09.511493385Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
grafana | logger=migrator t=2024-05-01T08:51:09.51273287Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.235584ms
grafana | logger=migrator t=2024-05-01T08:51:09.516373393Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
grafana | logger=migrator t=2024-05-01T08:51:09.517505922Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.132519ms
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdp.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
policy-db-migrator | JOIN pdpstatistics b
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
policy-db-migrator | SET a.id = b.id
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
policy-db-migrator | --------------
grafana | logger=migrator t=2024-05-01T08:51:09.520867129Z level=info msg="Executing migration" id="add unique index role.uid"
grafana | logger=migrator t=2024-05-01T08:51:09.522027241Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.160132ms
grafana | logger=migrator t=2024-05-01T08:51:09.526732849Z level=info msg="Executing migration" id="create seed assignment table"
grafana | logger=migrator t=2024-05-01T08:51:09.527631547Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=897.828µs
grafana | logger=migrator t=2024-05-01T08:51:09.5312955Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
grafana | logger=migrator t=2024-05-01T08:51:09.533296026Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=2.000286ms
grafana |
logger=migrator t=2024-05-01T08:51:09.537197452Z level=info msg="Executing migration" id="add column hidden to role table"
grafana | logger=migrator t=2024-05-01T08:51:09.545510691Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.312769ms
grafana | logger=migrator t=2024-05-01T08:51:09.549430048Z level=info msg="Executing migration" id="permission kind migration"
grafana | logger=migrator t=2024-05-01T08:51:09.561270144Z level=info msg="Migration successfully executed" id="permission kind migration" duration=11.841176ms
grafana | logger=migrator t=2024-05-01T08:51:09.565918418Z level=info msg="Executing migration" id="permission attribute migration"
grafana | logger=migrator t=2024-05-01T08:51:09.574354024Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.434996ms
grafana | logger=migrator t=2024-05-01T08:51:09.577985616Z level=info msg="Executing migration" id="permission identifier migration"
grafana | logger=migrator t=2024-05-01T08:51:09.584102029Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=6.115923ms
grafana | logger=migrator t=2024-05-01T08:51:09.58773202Z level=info msg="Executing migration" id="add permission identifier index"
grafana | logger=migrator t=2024-05-01T08:51:09.589071832Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.31033ms
grafana | logger=migrator t=2024-05-01T08:51:09.594690398Z level=info msg="Executing migration" id="add permission action scope role_id index"
grafana | logger=migrator t=2024-05-01T08:51:09.597261554Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=2.569336ms
grafana | logger=migrator t=2024-05-01T08:51:09.601795603Z level=info msg="Executing migration" id="remove permission role_id action scope index"
grafana | logger=migrator t=2024-05-01T08:51:09.603138024Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.343321ms
grafana | logger=migrator t=2024-05-01T08:51:09.611409331Z level=info msg="Executing migration" id="create query_history table v1"
grafana | logger=migrator t=2024-05-01T08:51:09.612751192Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.341331ms
grafana | logger=migrator t=2024-05-01T08:51:09.617677292Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
grafana | logger=migrator t=2024-05-01T08:51:09.618944709Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.266456ms
grafana | logger=migrator t=2024-05-01T08:51:09.622811453Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
grafana | logger=migrator t=2024-05-01T08:51:09.622964871Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=153.048µs
grafana | logger=migrator t=2024-05-01T08:51:09.626418114Z level=info msg="Executing migration" id="rbac disabled migrator"
grafana | logger=migrator t=2024-05-01T08:51:09.626455025Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=37.662µs
grafana | logger=migrator t=2024-05-01T08:51:09.630845378Z level=info msg="Executing migration" id="teams permissions migration"
grafana | logger=migrator t=2024-05-01T08:51:09.631665271Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=815.952µs
grafana | logger=migrator t=2024-05-01T08:51:09.635657081Z level=info msg="Executing migration" id="dashboard permissions"
grafana | logger=migrator t=2024-05-01T08:51:09.636622223Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=965.912µs
grafana | logger=migrator t=2024-05-01T08:51:09.641542812Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
grafana | logger=migrator t=2024-05-01T08:51:09.643114025Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.570943ms
grafana | logger=migrator t=2024-05-01T08:51:09.647764171Z level=info msg="Executing migration" id="drop managed folder create actions"
grafana | logger=migrator t=2024-05-01T08:51:09.648033605Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=269.014µs
grafana | logger=migrator t=2024-05-01T08:51:09.652333372Z level=info msg="Executing migration" id="alerting notification permissions"
grafana | logger=migrator t=2024-05-01T08:51:09.652850509Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=517.037µs
grafana | logger=migrator t=2024-05-01T08:51:09.655765903Z level=info msg="Executing migration" id="create query_history_star table v1"
grafana | logger=migrator t=2024-05-01T08:51:09.656711503Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=944.84µs
grafana | logger=migrator t=2024-05-01T08:51:09.662433966Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
grafana | logger=migrator t=2024-05-01T08:51:09.663681301Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.247346ms
grafana | logger=migrator t=2024-05-01T08:51:09.666783875Z level=info msg="Executing migration" id="add column org_id in query_history_star"
grafana | logger=migrator t=2024-05-01T08:51:09.677972586Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=11.18828ms
grafana | logger=migrator t=2024-05-01T08:51:09.684373354Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
grafana |
logger=migrator t=2024-05-01T08:51:09.68448241Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=108.866µs grafana | logger=migrator t=2024-05-01T08:51:09.687865308Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2024-05-01T08:51:09.688929654Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.064956ms grafana | logger=migrator t=2024-05-01T08:51:09.694854797Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2024-05-01T08:51:09.696166257Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.31125ms grafana | logger=migrator t=2024-05-01T08:51:09.700732057Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2024-05-01T08:51:09.701979784Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.247727ms grafana | logger=migrator t=2024-05-01T08:51:09.705119809Z level=info msg="Executing migration" id="add correlation config column" policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | -------------- policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | 
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscatrigger policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscaparameter policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata policy-db-migrator | -------------- policy-db-migrator | 
policy-db-migrator | -------------- grafana | logger=migrator t=2024-05-01T08:51:09.713498072Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.377393ms grafana | logger=migrator t=2024-05-01T08:51:09.717485022Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2024-05-01T08:51:09.718356309Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=871.277µs grafana | logger=migrator t=2024-05-01T08:51:09.722483176Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" grafana | logger=migrator t=2024-05-01T08:51:09.72350031Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.016634ms grafana | logger=migrator t=2024-05-01T08:51:09.726978284Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2024-05-01T08:51:09.749614319Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.636435ms grafana | logger=migrator t=2024-05-01T08:51:09.753773128Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2024-05-01T08:51:09.754705958Z level=info msg="Migration successfully executed" id="create correlation v2" duration=932.21µs grafana | logger=migrator t=2024-05-01T08:51:09.757949999Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2024-05-01T08:51:09.758765022Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=814.933µs grafana | logger=migrator t=2024-05-01T08:51:09.762123019Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2024-05-01T08:51:09.764055441Z level=info msg="Migration successfully 
executed" id="create index IDX_correlation_source_uid - v2" duration=1.932292ms grafana | logger=migrator t=2024-05-01T08:51:09.770674801Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2024-05-01T08:51:09.771820082Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.145341ms grafana | logger=migrator t=2024-05-01T08:51:09.778338726Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2024-05-01T08:51:09.778674543Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=335.607µs grafana | logger=migrator t=2024-05-01T08:51:09.781148024Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2024-05-01T08:51:09.78201323Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=864.216µs grafana | logger=migrator t=2024-05-01T08:51:09.786281545Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2024-05-01T08:51:09.797600243Z level=info msg="Migration successfully executed" id="add provisioning column" duration=11.320168ms grafana | logger=migrator t=2024-05-01T08:51:09.800917498Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2024-05-01T08:51:09.801645767Z level=info msg="Migration successfully executed" id="create entity_events table" duration=728.169µs grafana | logger=migrator t=2024-05-01T08:51:09.80531013Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2024-05-01T08:51:09.806592848Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.278547ms grafana | logger=migrator t=2024-05-01T08:51:09.817075731Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | 
logger=migrator t=2024-05-01T08:51:09.817586618Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2024-05-01T08:51:09.821610181Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2024-05-01T08:51:09.822199342Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2024-05-01T08:51:09.825518887Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | logger=migrator t=2024-05-01T08:51:09.826395763Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=876.106µs grafana | logger=migrator t=2024-05-01T08:51:09.830571364Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2024-05-01T08:51:09.831787088Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.215344ms grafana | logger=migrator t=2024-05-01T08:51:09.834977976Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2024-05-01T08:51:09.836208132Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.229796ms grafana | logger=migrator t=2024-05-01T08:51:09.83976959Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2024-05-01T08:51:09.841075609Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.305759ms grafana | logger=migrator t=2024-05-01T08:51:09.845615068Z level=info msg="Executing migration" id="drop 
index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2024-05-01T08:51:09.848370584Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=2.754976ms grafana | logger=migrator t=2024-05-01T08:51:09.855228416Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2024-05-01T08:51:09.856554816Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.32675ms grafana | logger=migrator t=2024-05-01T08:51:09.860764869Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2024-05-01T08:51:09.861645305Z level=info msg="Migration successfully executed" id="Drop public config table" duration=880.066µs grafana | logger=migrator t=2024-05-01T08:51:09.865131279Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2024-05-01T08:51:09.866336353Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.204544ms grafana | logger=migrator t=2024-05-01T08:51:09.870689202Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2024-05-01T08:51:09.871943359Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.254037ms grafana | logger=migrator t=2024-05-01T08:51:09.875501927Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2024-05-01T08:51:09.876965984Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.463816ms grafana | logger=migrator t=2024-05-01T08:51:09.881950497Z level=info msg="Executing migration" id="create index 
UQE_dashboard_public_config_access_token - v2" zookeeper | ===> User zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper | ===> Configuring ... zookeeper | ===> Running preflight checks ... zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper | ===> Launching ... zookeeper | ===> Launching zookeeper ... zookeeper | [2024-05-01 08:51:03,797] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-01 08:51:03,805] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-01 08:51:03,805] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-01 08:51:03,805] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-01 08:51:03,805] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-01 08:51:03,806] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-05-01 08:51:03,806] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-05-01 08:51:03,807] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper | [2024-05-01 08:51:03,807] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper | [2024-05-01 08:51:03,808] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) zookeeper | [2024-05-01 08:51:03,808] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-01 08:51:03,808] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-01 08:51:03,808] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-01 08:51:03,808] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-01 08:51:03,808] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper | [2024-05-01 08:51:03,809] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper | [2024-05-01 08:51:03,819] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3246fb96 (org.apache.zookeeper.server.ServerMetrics) zookeeper | [2024-05-01 08:51:03,822] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2024-05-01 08:51:03,822] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper | [2024-05-01 08:51:03,824] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper | [2024-05-01 08:51:03,833] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-01 08:51:03,833] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-01 08:51:03,833] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-01 08:51:03,833] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-01 08:51:03,833] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | 
'_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-01 08:51:03,833] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-01 08:51:03,833] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-01 08:51:03,833] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-01 08:51:03,833] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-01 08:51:03,833] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:host.name=8f0f1f14ae74 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | [2024-05-01T08:51:34.722+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-pap | [2024-05-01T08:51:34.815+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4ee5b2d9 policy-pap | [2024-05-01T08:51:34.817+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
policy-pap | [2024-05-01T08:51:34.866+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect policy-pap | [2024-05-01T08:51:36.299+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] policy-pap | [2024-05-01T08:51:36.309+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-pap | [2024-05-01T08:51:36.758+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository policy-pap | [2024-05-01T08:51:37.154+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository policy-pap | [2024-05-01T08:51:37.265+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository policy-pap | [2024-05-01T08:51:37.593+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-1 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = e55cdecf-bd7f-4245-8ff0-8ac852d4496f policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-05-01T08:51:09.883369012Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.417975ms grafana | logger=migrator t=2024-05-01T08:51:09.8914584Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2024-05-01T08:51:09.918141418Z level=info 
msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=26.790854ms grafana | logger=migrator t=2024-05-01T08:51:10.002089171Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2024-05-01T08:51:10.010814541Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.723749ms grafana | logger=migrator t=2024-05-01T08:51:10.014328113Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2024-05-01T08:51:10.024707266Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=10.378642ms grafana | logger=migrator t=2024-05-01T08:51:10.030083662Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2024-05-01T08:51:10.030324925Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=238.923µs grafana | logger=migrator t=2024-05-01T08:51:10.034804562Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2024-05-01T08:51:10.043414466Z level=info msg="Migration successfully executed" id="add share column" duration=8.609444ms grafana | logger=migrator t=2024-05-01T08:51:10.046726358Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" grafana | logger=migrator t=2024-05-01T08:51:10.046995043Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=268.605µs grafana | logger=migrator t=2024-05-01T08:51:10.055150602Z level=info msg="Executing migration" id="create file table" grafana | logger=migrator t=2024-05-01T08:51:10.05656238Z level=info msg="Migration successfully executed" id="create file table" duration=1.411358ms grafana | logger=migrator t=2024-05-01T08:51:10.061871912Z level=info msg="Executing 
migration" id="file table idx: path natural pk" grafana | logger=migrator t=2024-05-01T08:51:10.063113061Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.240929ms grafana | logger=migrator t=2024-05-01T08:51:10.066579782Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" grafana | logger=migrator t=2024-05-01T08:51:10.0678117Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.233969ms grafana | logger=migrator t=2024-05-01T08:51:10.071114872Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2024-05-01T08:51:10.072024961Z level=info msg="Migration successfully executed" id="create file_meta table" duration=907.709µs grafana | logger=migrator t=2024-05-01T08:51:10.078478837Z level=info msg="Executing migration" id="file table idx: path key" grafana | logger=migrator t=2024-05-01T08:51:10.079865703Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.387087ms grafana | logger=migrator t=2024-05-01T08:51:10.086269426Z level=info msg="Executing migration" id="set path collation in file table" grafana | logger=migrator t=2024-05-01T08:51:10.086393783Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=123.946µs grafana | logger=migrator t=2024-05-01T08:51:10.092046664Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" grafana | logger=migrator t=2024-05-01T08:51:10.092167301Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=120.096µs grafana | logger=migrator t=2024-05-01T08:51:10.096178642Z level=info msg="Executing migration" id="managed permissions migration" grafana | logger=migrator t=2024-05-01T08:51:10.096818087Z level=info msg="Migration successfully executed" 
id="managed permissions migration" duration=639.105µs grafana | logger=migrator t=2024-05-01T08:51:10.100007322Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" grafana | logger=migrator t=2024-05-01T08:51:10.100337041Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=329.188µs grafana | logger=migrator t=2024-05-01T08:51:10.103668094Z level=info msg="Executing migration" id="RBAC action name migrator" grafana | logger=migrator t=2024-05-01T08:51:10.106128069Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.459085ms grafana | logger=migrator t=2024-05-01T08:51:10.109737228Z level=info msg="Executing migration" id="Add UID column to playlist" grafana | logger=migrator t=2024-05-01T08:51:10.119380909Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.643801ms grafana | logger=migrator t=2024-05-01T08:51:10.126125531Z level=info msg="Executing migration" id="Update uid column values in playlist" grafana | logger=migrator t=2024-05-01T08:51:10.126580425Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=454.135µs grafana | logger=migrator t=2024-05-01T08:51:10.132902614Z level=info msg="Executing migration" id="Add index for uid in playlist" grafana | logger=migrator t=2024-05-01T08:51:10.134192015Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.289351ms grafana | logger=migrator t=2024-05-01T08:51:10.138691243Z level=info msg="Executing migration" id="update group index for alert rules" grafana | logger=migrator t=2024-05-01T08:51:10.139586362Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=905.15µs grafana | logger=migrator t=2024-05-01T08:51:10.144619469Z level=info msg="Executing migration" id="managed folder permissions alert 
actions repeated migration" grafana | logger=migrator t=2024-05-01T08:51:10.145235603Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=614.234µs grafana | logger=migrator t=2024-05-01T08:51:10.148779728Z level=info msg="Executing migration" id="admin only folder/dashboard permission" grafana | logger=migrator t=2024-05-01T08:51:10.149771233Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=990.205µs grafana | logger=migrator t=2024-05-01T08:51:10.15371434Z level=info msg="Executing migration" id="add action column to seed_assignment" grafana | logger=migrator t=2024-05-01T08:51:10.164507005Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=10.793124ms grafana | logger=migrator t=2024-05-01T08:51:10.171364752Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2024-05-01T08:51:10.180640373Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.26554ms grafana | logger=migrator t=2024-05-01T08:51:10.184082933Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" grafana | logger=migrator t=2024-05-01T08:51:10.185335051Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.251689ms grafana | logger=migrator t=2024-05-01T08:51:10.189612257Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" grafana | logger=migrator t=2024-05-01T08:51:10.268986798Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=79.370261ms grafana | logger=migrator t=2024-05-01T08:51:10.274023266Z level=info msg="Executing migration" id="add unique index builtin_role_name back" grafana | 
logger=migrator t=2024-05-01T08:51:10.275461635Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.43839ms grafana | logger=migrator t=2024-05-01T08:51:10.285285396Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" grafana | logger=migrator t=2024-05-01T08:51:10.287731101Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=2.445434ms grafana | logger=migrator t=2024-05-01T08:51:10.292159975Z level=info msg="Executing migration" id="add primary key to seed_assigment" grafana | logger=migrator t=2024-05-01T08:51:10.318655984Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=26.494328ms grafana | logger=migrator t=2024-05-01T08:51:10.327172823Z level=info msg="Executing migration" id="add origin column to seed_assignment" grafana | logger=migrator t=2024-05-01T08:51:10.334285245Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.106691ms grafana | logger=migrator t=2024-05-01T08:51:10.339973078Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" grafana | logger=migrator t=2024-05-01T08:51:10.340365459Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=391.881µs grafana | logger=migrator t=2024-05-01T08:51:10.346238473Z level=info msg="Executing migration" id="prevent seeding OnCall access" grafana | logger=migrator t=2024-05-01T08:51:10.346485956Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=247.103µs grafana | logger=migrator t=2024-05-01T08:51:10.350281275Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" grafana | logger=migrator t=2024-05-01T08:51:10.350870838Z level=info msg="Migration successfully executed" id="managed folder permissions 
alert actions repeated fixed migration" duration=653.897µs
grafana | logger=migrator t=2024-05-01T08:51:10.358152249Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
grafana | logger=migrator t=2024-05-01T08:51:10.358769402Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=616.874µs
grafana | logger=migrator t=2024-05-01T08:51:10.363872454Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
grafana | logger=migrator t=2024-05-01T08:51:10.364333049Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=459.916µs
kafka | [2024-05-01 08:51:08,142] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2024-05-01 08:51:08,142] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2024-05-01 08:51:08,143] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2024-05-01 08:51:08,144] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2024-05-01 08:51:08,149] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2024-05-01 08:51:08,156] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-05-01 08:51:08,157] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
(kafka.network.SocketServer) kafka | [2024-05-01 08:51:08,157] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-05-01 08:51:08,161] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) kafka | [2024-05-01 08:51:08,162] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) kafka | [2024-05-01 08:51:08,162] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) kafka | [2024-05-01 08:51:08,163] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) kafka | [2024-05-01 08:51:08,165] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) kafka | [2024-05-01 08:51:08,166] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) policy-db-migrator | DROP TABLE IF EXISTS toscaproperty policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | -------------- policy-db-migrator | select 'upgrade to 1100 completed' as msg policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | msg policy-pap | metrics.num.samples = 2 zookeeper | [2024-05-01 08:51:03,834] INFO Server 
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Fi
nal.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/ka
fka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tool
s-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.371146585Z level=info msg="Executing migration" id="create folder table" simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json policy-db-migrator | upgrade to 1100 completed policy-db-migrator | policy-pap | metrics.recording.level = INFO zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.372729161Z level=info msg="Migration successfully executed" id="create folder table" duration=1.581567ms simulator | overriding logback.xml policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | -------------- policy-pap | metrics.sample.window.ms = 30000 zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.376530091Z level=info msg="Executing migration" id="Add index for parent_uid" simulator | 2024-05-01 08:51:04,831 INFO replacing 'HOST_NAME' 
with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME policy-db-migrator | -------------- policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.378668428Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.137357ms simulator | 2024-05-01 08:51:04,897 INFO org.onap.policy.models.simulators starting policy-db-migrator | policy-db-migrator | policy-pap | receive.buffer.bytes = 65536 zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.383374868Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" simulator | 2024-05-01 08:51:04,897 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | -------------- policy-pap | reconnect.backoff.max.ms = 1000 zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.384879161Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.503563ms simulator | 2024-05-01 08:51:05,075 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics policy-db-migrator | -------------- kafka | [2024-05-01 08:51:08,166] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 
(kafka.controller.KafkaController) zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.388059996Z level=info msg="Executing migration" id="Update folder title length" simulator | 2024-05-01 08:51:05,076 INFO org.onap.policy.models.simulators starting A&AI simulator simulator | 2024-05-01 08:51:05,200 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-db-migrator | kafka | [2024-05-01 08:51:08,169] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. 
(org.apache.kafka.clients.NetworkClient) zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.388090098Z level=info msg="Migration successfully executed" id="Update folder title length" duration=30.792µs simulator | 2024-05-01 08:51:05,210 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-pap | reconnect.backoff.ms = 50 policy-db-migrator | -------------- kafka | [2024-05-01 08:51:08,172] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.6:9092) could not be established. Broker may not be available. 
(org.apache.kafka.clients.NetworkClient) zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.391242041Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) kafka | [2024-05-01 08:51:08,172] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.393009579Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.766768ms policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-db-migrator | -------------- kafka | [2024-05-01 08:51:08,173] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.399906148Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | kafka | [2024-05-01 08:51:08,173] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator 
t=2024-05-01T08:51:10.401101494Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.195665ms policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | kafka | [2024-05-01 08:51:08,173] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) zookeeper | [2024-05-01 08:51:03,834] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.405343998Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | > upgrade 0120-audit_sequence.sql kafka | [2024-05-01 08:51:08,174] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) zookeeper | [2024-05-01 08:51:03,834] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.406631218Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.28678ms policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | -------------- kafka | [2024-05-01 08:51:08,178] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) zookeeper | [2024-05-01 08:51:03,834] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.410060138Z level=info msg="Executing migration" id="Sync dashboard and folder table" policy-pap | sasl.login.read.timeout.ms = null policy-pap 
| sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. zookeeper | [2024-05-01 08:51:03,835] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.410564465Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=503.807µs policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | -------------- kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) zookeeper | [2024-05-01 08:51:03,835] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=migrator t=2024-05-01T08:51:10.41482597Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130) zookeeper | [2024-05-01 08:51:03,835] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | -------------- kafka | [2024-05-01 08:51:08,181] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) kafka | [2024-05-01 08:51:08,189] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) zookeeper | [2024-05-01 08:51:03,835] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) kafka | [2024-05-01 08:51:08,193] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) kafka | [2024-05-01 08:51:08,212] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser) zookeeper | [2024-05-01 08:51:03,835] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-05-01 08:51:05,213 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-db-migrator | -------------- kafka | [2024-05-01 08:51:08,212] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser) kafka | [2024-05-01 08:51:08,212] INFO Kafka startTimeMs: 1714553468205 (org.apache.kafka.common.utils.AppInfoParser) policy-pap | sasl.oauthbearer.expected.issuer = null zookeeper | [2024-05-01 
08:51:03,835] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) simulator | 2024-05-01 08:51:05,218 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 policy-db-migrator | kafka | [2024-05-01 08:51:08,214] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) kafka | [2024-05-01 08:51:08,286] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 zookeeper | [2024-05-01 08:51:03,836] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-05-01 08:51:05,271 INFO Session workerName=node0 policy-db-migrator | kafka | [2024-05-01 08:51:08,347] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-05-01 08:51:08,358] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 zookeeper | [2024-05-01 08:51:03,836] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) simulator | 2024-05-01 08:51:05,882 INFO Using GSON for REST calls policy-db-migrator | > upgrade 0130-statistics_sequence.sql kafka | [2024-05-01 08:51:08,358] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) kafka | [2024-05-01 08:51:13,195] INFO [Controller id=1] Processing automatic preferred replica leader election 
(kafka.controller.KafkaController) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 zookeeper | [2024-05-01 08:51:03,837] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) simulator | 2024-05-01 08:51:05,977 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} policy-db-migrator | -------------- kafka | [2024-05-01 08:51:13,196] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) kafka | [2024-05-01 08:51:39,771] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null zookeeper | [2024-05-01 08:51:03,837] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) simulator | 2024-05-01 08:51:05,994 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-05-01 08:51:39,773] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> 
ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) kafka | [2024-05-01 08:51:39,780] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) policy-pap | sasl.oauthbearer.scope.claim.name = scope zookeeper | [2024-05-01 08:51:03,838] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) simulator | 2024-05-01 08:51:06,003 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1621ms policy-db-migrator | -------------- kafka | [2024-05-01 08:51:39,787] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) kafka | [2024-05-01 08:51:39,807] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(ctm0k7NMTIu_tGFXft5nrA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(JcqNatGCTIqk2TVHn8pksg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) policy-pap | sasl.oauthbearer.sub.claim.name = sub zookeeper | [2024-05-01 08:51:03,838] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) simulator | 2024-05-01 08:51:06,004 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4209 ms. 
policy-db-migrator | kafka | [2024-05-01 08:51:39,808] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) kafka | [2024-05-01 08:51:39,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.oauthbearer.token.endpoint.url = null zookeeper | [2024-05-01 08:51:03,838] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) simulator | 2024-05-01 08:51:06,009 INFO org.onap.policy.models.simulators starting SDNC simulator policy-db-migrator | -------------- kafka | [2024-05-01 08:51:39,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | 
[2024-05-01 08:51:39,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | security.protocol = PLAINTEXT zookeeper | [2024-05-01 08:51:03,838] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) simulator | 2024-05-01 08:51:06,013 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 zookeeper | [2024-05-01 08:51:03,838] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-05-01T08:51:10.415167649Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=341.14µs grafana | logger=migrator t=2024-05-01T08:51:10.420219317Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" zookeeper | [2024-05-01 
08:51:03,838] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) policy-db-migrator | -------------- policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-05-01T08:51:10.42136459Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.145164ms grafana | logger=migrator t=2024-05-01T08:51:10.424980939Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" zookeeper | [2024-05-01 08:51:03,840] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) policy-db-migrator | policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-05-01T08:51:10.426916416Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.934917ms grafana | logger=migrator t=2024-05-01T08:51:10.433492088Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" zookeeper | [2024-05-01 08:51:03,840] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) policy-db-migrator | -------------- policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-05-01T08:51:10.434865724Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.373246ms grafana | logger=migrator t=2024-05-01T08:51:10.441659148Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" zookeeper | [2024-05-01 08:51:03,841] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) policy-db-migrator | TRUNCATE TABLE sequence policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap 
| ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-05-01T08:51:10.443724962Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.069244ms grafana | logger=migrator t=2024-05-01T08:51:10.447386913Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" zookeeper | [2024-05-01 08:51:03,841] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) policy-db-migrator | -------------- policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-05-01T08:51:10.448525995Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.139172ms grafana | logger=migrator t=2024-05-01T08:51:10.452576939Z level=info msg="Executing migration" id="create anon_device table" zookeeper | [2024-05-01 08:51:03,841] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) policy-db-migrator | policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-05-01T08:51:10.453616316Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.038857ms grafana | logger=migrator t=2024-05-01T08:51:10.456680935Z level=info msg="Executing migration" id="add unique index anon_device.device_id" zookeeper | [2024-05-01 08:51:03,859] INFO Logging initialized @542ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) policy-db-migrator | policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null grafana | logger=migrator t=2024-05-01T08:51:10.458025739Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" 
duration=1.344825ms grafana | logger=migrator t=2024-05-01T08:51:10.461369073Z level=info msg="Executing migration" id="add index anon_device.updated_at" zookeeper | [2024-05-01 08:51:03,934] WARN o.e.j.s.ServletContextHandler@311bf055{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) policy-db-migrator | > upgrade 0100-pdpstatistics.sql policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-05-01T08:51:10.462824264Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.45201ms kafka | [2024-05-01 08:51:39,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) zookeeper | [2024-05-01 08:51:03,934] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) policy-db-migrator | -------------- policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-05-01T08:51:10.467201684Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2024-05-01T08:51:10.468511817Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.310183ms zookeeper | [2024-05-01 08:51:03,951] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-05-01T08:51:10.47510489Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2024-05-01T08:51:10.477550714Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" 
duration=2.443205ms zookeeper | [2024-05-01 08:51:03,976] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) policy-db-migrator | -------------- policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | grafana | logger=migrator t=2024-05-01T08:51:10.482113335Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2024-05-01T08:51:10.483504722Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.391827ms zookeeper | [2024-05-01 08:51:03,976] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) policy-db-migrator | policy-pap | [2024-05-01T08:51:37.757+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-05-01T08:51:37.757+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-05-01T08:51:10.486447644Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2024-05-01T08:51:10.486819065Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=373.06µs zookeeper | [2024-05-01 08:51:03,977] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) policy-db-migrator | -------------- policy-pap | [2024-05-01T08:51:37.757+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714553497755 policy-pap | [2024-05-01T08:51:37.759+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-1, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Subscribed to topic(s): policy-pdp-pap grafana | logger=migrator t=2024-05-01T08:51:10.490700408Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2024-05-01T08:51:10.500812085Z level=info msg="Migration 
successfully executed" id="Add folder_uid for dashboard" duration=10.111667ms zookeeper | [2024-05-01 08:51:03,982] WARN ServletContext@o.e.j.s.ServletContextHandler@311bf055{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) policy-db-migrator | DROP TABLE pdpstatistics policy-pap | [2024-05-01T08:51:37.760+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: simulator | 2024-05-01 08:51:06,014 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING grafana | logger=migrator t=2024-05-01T08:51:10.507059779Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2024-05-01T08:51:10.507826382Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=767.293µs zookeeper | [2024-05-01 08:51:03,991] INFO Started o.e.j.s.ServletContextHandler@311bf055{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) policy-db-migrator | -------------- policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 grafana | logger=migrator t=2024-05-01T08:51:10.51470405Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | 
logger=migrator t=2024-05-01T08:51:10.516569733Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.863813ms zookeeper | [2024-05-01 08:51:04,005] INFO Started ServerConnector@6f53b8a{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) policy-db-migrator | policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest grafana | logger=migrator t=2024-05-01T08:51:10.520403074Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" grafana | logger=migrator t=2024-05-01T08:51:10.52232185Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.905345ms zookeeper | [2024-05-01 08:51:04,005] INFO Started @687ms (org.eclipse.jetty.server.Server) policy-db-migrator | policy-pap | bootstrap.servers = [kafka:9092] policy-pap | check.crcs = true grafana | logger=migrator t=2024-05-01T08:51:10.527539638Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2024-05-01T08:51:10.528720832Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.180965ms zookeeper | [2024-05-01 08:51:04,005] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-policy-pap-2 grafana | logger=migrator t=2024-05-01T08:51:10.532950496Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" grafana | logger=migrator t=2024-05-01T08:51:10.534272228Z level=info msg="Migration successfully executed" id="Add unique index for 
dashboard_org_id_folder_uid_title_is_folder" duration=1.338753ms kafka | [2024-05-01 08:51:39,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 zookeeper | [2024-05-01 08:51:04,010] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) grafana | logger=migrator t=2024-05-01T08:51:10.539912739Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" kafka | [2024-05-01 08:51:39,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true zookeeper | [2024-05-01 08:51:04,011] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) grafana | logger=migrator t=2024-05-01T08:51:10.541597562Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.683844ms kafka | [2024-05-01 08:51:39,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 zookeeper | [2024-05-01 08:51:04,012] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) grafana | logger=migrator t=2024-05-01T08:51:10.547414262Z level=info msg="Executing migration" id="create sso_setting table" kafka | [2024-05-01 08:51:39,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | fetch.max.wait.ms = 500 simulator | 2024-05-01 08:51:06,014 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING zookeeper | [2024-05-01 08:51:04,013] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) grafana | logger=migrator t=2024-05-01T08:51:10.548526543Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.111511ms kafka | [2024-05-01 08:51:39,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | simulator | 2024-05-01 08:51:06,015 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 simulator | 2024-05-01 08:51:06,051 INFO Session 
workerName=node0 zookeeper | [2024-05-01 08:51:04,026] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) grafana | logger=migrator t=2024-05-01T08:51:10.553477806Z level=info msg="Executing migration" id="copy kvstore migration status to each org" kafka | [2024-05-01 08:51:39,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0120-statistics_sequence.sql simulator | 2024-05-01 08:51:06,118 INFO Using GSON for REST calls simulator | 2024-05-01 08:51:06,130 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} zookeeper | [2024-05-01 08:51:04,026] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) grafana | logger=migrator t=2024-05-01T08:51:10.554890513Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.413817ms kafka | [2024-05-01 08:51:39,812] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- simulator | 2024-05-01 08:51:06,132 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} simulator | 2024-05-01 08:51:06,132 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1749ms zookeeper | [2024-05-01 08:51:04,028] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) grafana | logger=migrator t=2024-05-01T08:51:10.558414928Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" kafka | [2024-05-01 08:51:39,812] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) policy-db-migrator | DROP TABLE statistics_sequence policy-pap | fetch.min.bytes = 1 zookeeper | [2024-05-01 08:51:04,028] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) simulator | 2024-05-01 08:51:06,132 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4882 ms. 
grafana | logger=migrator t=2024-05-01T08:51:10.55900439Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=590.512µs kafka | [2024-05-01 08:51:39,812] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | group.id = policy-pap zookeeper | [2024-05-01 08:51:04,032] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) simulator | 2024-05-01 08:51:06,133 INFO org.onap.policy.models.simulators starting SO simulator grafana | logger=migrator t=2024-05-01T08:51:10.563527799Z level=info msg="Executing migration" id="alter kv_store.value to longtext" kafka | [2024-05-01 08:51:39,812] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policy-pap | group.instance.id = null zookeeper | [2024-05-01 08:51:04,032] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) simulator | 2024-05-01 08:51:06,137 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START grafana | logger=migrator t=2024-05-01T08:51:10.563639876Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=111.108µs kafka | [2024-05-01 08:51:39,812] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | policyadmin: OK: upgrade (1300) policy-pap | heartbeat.interval.ms = 3000 zookeeper | [2024-05-01 08:51:04,034] INFO Snapshot loaded in 6 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) simulator | 2024-05-01 08:51:06,138 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING grafana | logger=migrator t=2024-05-01T08:51:10.566862263Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" kafka | [2024-05-01 08:51:39,812] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 
policy-db-migrator | name version
policy-pap | interceptor.classes = []
zookeeper | [2024-05-01 08:51:04,035] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
simulator | 2024-05-01 08:51:06,138 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-05-01T08:51:10.57735006Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=10.487207ms
kafka | [2024-05-01 08:51:39,812] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | policyadmin 1300
policy-pap | internal.leave.group.on.close = true
zookeeper | [2024-05-01 08:51:04,035] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
simulator | 2024-05-01 08:51:06,139 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
grafana | logger=migrator t=2024-05-01T08:51:10.58571028Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
kafka | [2024-05-01 08:51:39,812] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | ID script operation from_version to_version tag success atTime
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
zookeeper | [2024-05-01 08:51:04,043] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
simulator | 2024-05-01 08:51:06,150 INFO Session workerName=node0
grafana | logger=migrator t=2024-05-01T08:51:10.59949664Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=13.784269ms
kafka | [2024-05-01 08:51:39,812] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:09
policy-pap | isolation.level = read_uncommitted
zookeeper | [2024-05-01 08:51:04,043] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
simulator | 2024-05-01 08:51:06,262 INFO Using GSON for REST calls
grafana | logger=migrator t=2024-05-01T08:51:10.604894897Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:09
kafka | [2024-05-01 08:51:39,812] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
zookeeper | [2024-05-01 08:51:04,055] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
simulator | 2024-05-01 08:51:06,278 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}
grafana | logger=migrator t=2024-05-01T08:51:10.605159301Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=263.904µs
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:09
kafka | [2024-05-01 08:51:39,812] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | max.partition.fetch.bytes = 1048576
zookeeper | [2024-05-01 08:51:04,056] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
simulator | 2024-05-01 08:51:06,280 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
grafana | logger=migrator t=2024-05-01T08:51:10.609784786Z level=info msg="migrations completed" performed=548 skipped=0 duration=4.912741491s
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:09
kafka | [2024-05-01 08:51:39,813] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | max.poll.interval.ms = 300000
zookeeper | [2024-05-01 08:51:05,127] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
simulator | 2024-05-01 08:51:06,281 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1898ms
grafana | logger=sqlstore t=2024-05-01T08:51:10.622563821Z level=info msg="Created default admin" user=admin
policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:09
kafka | [2024-05-01 08:51:39,813] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | max.poll.records = 500
simulator | 2024-05-01 08:51:06,281 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4857 ms.
grafana | logger=sqlstore t=2024-05-01T08:51:10.622904479Z level=info msg="Created default organization"
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:09
kafka | [2024-05-01 08:51:39,813] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | metadata.max.age.ms = 300000
simulator | 2024-05-01 08:51:06,282 INFO org.onap.policy.models.simulators starting VFC simulator
grafana | logger=secrets t=2024-05-01T08:51:10.628169779Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:09
kafka | [2024-05-01 08:51:39,813] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | metric.reporters = []
simulator | 2024-05-01 08:51:06,287 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
grafana | logger=plugin.store t=2024-05-01T08:51:10.660281767Z level=info msg="Loading plugins..."
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:09
kafka | [2024-05-01 08:51:39,813] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | metrics.num.samples = 2
simulator | 2024-05-01 08:51:06,288 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=local.finder t=2024-05-01T08:51:10.703448524Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:09
kafka | [2024-05-01 08:51:39,813] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=plugin.store t=2024-05-01T08:51:10.703481796Z level=info msg="Plugins loaded" count=55 duration=43.201179ms
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
kafka | [2024-05-01 08:51:39,813] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
simulator | 2024-05-01 08:51:06,289 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=query_data t=2024-05-01T08:51:10.7062828Z level=info msg="Query Service initialization"
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
kafka | [2024-05-01 08:51:39,813] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
simulator | 2024-05-01 08:51:06,289 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
policy-pap | receive.buffer.bytes = 65536
grafana | logger=live.push_http t=2024-05-01T08:51:10.710776608Z level=info msg="Live Push Gateway initialization"
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
kafka | [2024-05-01 08:51:39,813] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
simulator | 2024-05-01 08:51:06,293 INFO Session workerName=node0
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=ngalert.migration t=2024-05-01T08:51:10.715438585Z level=info msg=Starting
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
kafka | [2024-05-01 08:51:39,813] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
simulator | 2024-05-01 08:51:06,372 INFO Using GSON for REST calls
policy-pap | reconnect.backoff.ms = 50
grafana | logger=ngalert.migration t=2024-05-01T08:51:10.71588758Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
kafka | [2024-05-01 08:51:39,813] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | request.timeout.ms = 30000
grafana | logger=ngalert.migration orgID=1 t=2024-05-01T08:51:10.716401848Z level=info msg="Migrating alerts for organisation"
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
simulator | 2024-05-01 08:51:06,380 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}
kafka | [2024-05-01 08:51:39,814] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | retry.backoff.ms = 100
policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=ngalert.migration orgID=1 t=2024-05-01T08:51:10.717137488Z level=info msg="Alerts found to migrate" alerts=0
simulator | 2024-05-01 08:51:06,382 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
kafka | [2024-05-01 08:51:39,814] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.client.callback.handler.class = null
policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=ngalert.migration t=2024-05-01T08:51:10.719174221Z level=info msg="Completed alerting migration"
simulator | 2024-05-01 08:51:06,382 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @2000ms
kafka | [2024-05-01 08:51:39,814] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.jaas.config = null
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=ngalert.state.manager t=2024-05-01T08:51:10.745542042Z level=info msg="Running in alternative execution of Error/NoData mode"
simulator | 2024-05-01 08:51:06,382 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4906 ms.
kafka | [2024-05-01 08:51:39,814] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=infra.usagestats.collector t=2024-05-01T08:51:10.747374984Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
simulator | 2024-05-01 08:51:06,383 INFO org.onap.policy.models.simulators started
kafka | [2024-05-01 08:51:39,814] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=provisioning.datasources t=2024-05-01T08:51:10.749145061Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
kafka | [2024-05-01 08:51:39,814] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=provisioning.alerting t=2024-05-01T08:51:10.763952377Z level=info msg="starting to provision alerting"
kafka | [2024-05-01 08:51:39,814] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=provisioning.alerting t=2024-05-01T08:51:10.763982918Z level=info msg="finished to provision alerting"
policy-pap | sasl.kerberos.service.name = null
kafka | [2024-05-01 08:51:39,814] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=grafanaStorageLogger t=2024-05-01T08:51:10.764318097Z level=info msg="Storage starting"
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | [2024-05-01 08:51:39,814] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=http.server t=2024-05-01T08:51:10.768853347Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | [2024-05-01 08:51:39,814] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=ngalert.state.manager t=2024-05-01T08:51:10.769054148Z level=info msg="Warming state cache for startup"
policy-pap | sasl.login.callback.handler.class = null
kafka | [2024-05-01 08:51:39,815] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=ngalert.multiorg.alertmanager t=2024-05-01T08:51:10.77038229Z level=info msg="Starting MultiOrg Alertmanager"
policy-pap | sasl.login.class = null
kafka | [2024-05-01 08:51:39,815] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=sqlstore.transactions t=2024-05-01T08:51:10.787096201Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
policy-pap | sasl.login.connect.timeout.ms = null
kafka | [2024-05-01 08:51:39,815] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=sqlstore.transactions t=2024-05-01T08:51:10.800677839Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
policy-pap | sasl.login.read.timeout.ms = null
kafka | [2024-05-01 08:51:39,815] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=grafana.update.checker t=2024-05-01T08:51:10.857656207Z level=info msg="Update check succeeded" duration=90.136623ms
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | [2024-05-01 08:51:39,815] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=plugins.update.checker t=2024-05-01T08:51:10.858337265Z level=info msg="Update check succeeded" duration=91.074906ms
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | [2024-05-01 08:51:39,815] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=ngalert.state.manager t=2024-05-01T08:51:10.862325554Z level=info msg="State cache has been initialized" states=0 duration=93.268936ms
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | [2024-05-01 08:51:39,815] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=ngalert.scheduler t=2024-05-01T08:51:10.862416159Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | [2024-05-01 08:51:39,815] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=ticker t=2024-05-01T08:51:10.862711185Z level=info msg=starting first_tick=2024-05-01T08:51:20Z
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | [2024-05-01 08:51:39,815] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:10
grafana | logger=provisioning.dashboard t=2024-05-01T08:51:10.863428035Z level=info msg="starting to provision dashboards"
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | [2024-05-01 08:51:39,815] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
grafana | logger=sqlstore.transactions t=2024-05-01T08:51:10.955451992Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
policy-pap | sasl.mechanism = GSSAPI
kafka | [2024-05-01 08:51:39,820] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
grafana | logger=sqlstore.transactions t=2024-05-01T08:51:10.986176944Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-05-01 08:51:39,820] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
grafana | logger=grafana-apiserver t=2024-05-01T08:51:11.12363673Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | [2024-05-01 08:51:39,820] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
grafana | logger=grafana-apiserver t=2024-05-01T08:51:11.124309028Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | [2024-05-01 08:51:39,820] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
grafana | logger=provisioning.dashboard t=2024-05-01T08:51:11.178551558Z level=info msg="finished to provision dashboards"
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | [2024-05-01 08:51:39,820] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
grafana | logger=infra.usagestats t=2024-05-01T08:53:01.773653096Z level=info msg="Usage stats are ready to report"
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | [2024-05-01 08:51:39,820] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
kafka | [2024-05-01 08:51:39,820] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
kafka | [2024-05-01 08:51:39,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | [2024-05-01 08:51:39,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | [2024-05-01 08:51:39,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | security.protocol = PLAINTEXT
kafka | [2024-05-01 08:51:39,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | security.providers = null
kafka | [2024-05-01 08:51:39,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | send.buffer.bytes = 131072
kafka | [2024-05-01 08:51:39,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | session.timeout.ms = 45000
kafka | [2024-05-01 08:51:39,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | [2024-05-01 08:51:39,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | [2024-05-01 08:51:39,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | ssl.cipher.suites = null
kafka | [2024-05-01 08:51:39,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-05-01 08:51:39,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-05-01 08:51:39,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | ssl.engine.factory.class = null
kafka | [2024-05-01 08:51:39,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | ssl.key.password = null
kafka | [2024-05-01 08:51:39,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-05-01 08:51:39,821] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | ssl.keystore.key = null
kafka | [2024-05-01 08:51:39,822] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:11
policy-pap | ssl.keystore.location = null
kafka | [2024-05-01 08:51:39,822] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | ssl.keystore.password = null kafka | [2024-05-01 08:51:39,822] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | ssl.keystore.type = JKS kafka | [2024-05-01 08:51:39,822] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-05-01 08:51:39,822] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | ssl.provider = null kafka | [2024-05-01 08:51:39,822] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | ssl.secure.random.implementation = null kafka | [2024-05-01 08:51:39,822] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | 
ssl.trustmanager.algorithm = PKIX kafka | [2024-05-01 08:51:39,822] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | ssl.truststore.certificates = null kafka | [2024-05-01 08:51:39,822] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | ssl.truststore.location = null kafka | [2024-05-01 08:51:39,822] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | ssl.truststore.password = null kafka | [2024-05-01 08:51:39,822] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | ssl.truststore.type = JKS kafka | [2024-05-01 08:51:39,822] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-05-01 08:51:39,822] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica 
(state.change.logger) policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | kafka | [2024-05-01 08:51:39,822] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | [2024-05-01T08:51:37.766+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | [2024-05-01 08:51:39,823] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | [2024-05-01T08:51:37.766+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | [2024-05-01 08:51:39,823] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | [2024-05-01T08:51:37.766+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714553497766 kafka | [2024-05-01 08:51:39,823] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | [2024-05-01T08:51:37.766+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap kafka | [2024-05-01 08:51:39,823] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica 
(state.change.logger) policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 policy-pap | [2024-05-01T08:51:38.067+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json kafka | [2024-05-01 08:51:39,823] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 kafka | [2024-05-01 08:51:39,823] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-05-01T08:51:38.220+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 kafka | [2024-05-01 08:51:39,823] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-05-01T08:51:38.424+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@3d7caf9c, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4e26040f, org.springframework.security.web.context.SecurityContextHolderFilter@60b4d934, org.springframework.security.web.header.HeaderWriterFilter@2435c6ae, org.springframework.security.web.authentication.logout.LogoutFilter@6f4f2cc0, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@6a3a56de, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@41abee65, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@297dff3a, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@1782896, org.springframework.security.web.access.ExceptionTranslationFilter@6b630d4b, org.springframework.security.web.access.intercept.AuthorizationFilter@7cf66cf9] policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 kafka | [2024-05-01 08:51:39,823] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-05-01T08:51:39.132+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 kafka | [2024-05-01 
08:51:39,823] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-05-01T08:51:39.236+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 kafka | [2024-05-01 08:51:39,823] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-05-01T08:51:39.260+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 kafka | [2024-05-01 08:51:39,823] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-05-01T08:51:39.277+00:00|INFO|ServiceManager|main] Policy PAP starting policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:12 kafka | [2024-05-01 08:51:39,823] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-05-01T08:51:39.278+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:13 kafka | [2024-05-01 08:51:39,823] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-05-01T08:51:39.279+00:00|INFO|ServiceManager|main] Policy PAP starting PAP 
parameters policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:13 kafka | [2024-05-01 08:51:39,823] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | [2024-05-01T08:51:39.279+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:13 policy-pap | [2024-05-01T08:51:39.279+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher kafka | [2024-05-01 08:51:39,823] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:13 policy-pap | [2024-05-01T08:51:39.280+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher kafka | [2024-05-01 08:51:39,824] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:13 policy-pap | [2024-05-01T08:51:39.280+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher kafka | [2024-05-01 08:51:39,977] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:13 policy-pap | 
[2024-05-01T08:51:39.282+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=e55cdecf-bd7f-4245-8ff0-8ac852d4496f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@6f69b0ba kafka | [2024-05-01 08:51:39,977] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:13 policy-pap | [2024-05-01T08:51:39.292+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=e55cdecf-bd7f-4245-8ff0-8ac852d4496f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting kafka | [2024-05-01 08:51:39,977] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:13 policy-pap | [2024-05-01T08:51:39.293+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: kafka | [2024-05-01 08:51:39,977] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:13 policy-pap | allow.auto.create.topics = true kafka | [2024-05-01 08:51:39,977] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:13 policy-pap | auto.commit.interval.ms = 5000 kafka | [2024-05-01 08:51:39,977] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:13 policy-pap | auto.include.jmx.reporter = true kafka | [2024-05-01 08:51:39,977] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:13 policy-pap | auto.offset.reset = latest kafka | [2024-05-01 08:51:39,978] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:13 kafka | [2024-05-01 08:51:39,978] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0105240851090800u 1 2024-05-01 08:51:13 kafka | [2024-05-01 08:51:39,978] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | check.crcs = true policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0105240851090900u 1 2024-05-01 08:51:13 kafka | [2024-05-01 08:51:39,978] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with 
state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0105240851090900u 1 2024-05-01 08:51:13 kafka | [2024-05-01 08:51:39,978] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | client.id = consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0105240851090900u 1 2024-05-01 08:51:14 kafka | [2024-05-01 08:51:39,978] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | client.rack = policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0105240851090900u 1 2024-05-01 08:51:14 kafka | [2024-05-01 08:51:39,978] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 0105240851090900u 1 2024-05-01 08:51:14 kafka | [2024-05-01 08:51:39,978] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | default.api.timeout.ms = 60000 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0105240851090900u 1 2024-05-01 08:51:14 kafka | [2024-05-01 08:51:39,978] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | enable.auto.commit = true policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0105240851090900u 1 2024-05-01 08:51:14 kafka | [2024-05-01 08:51:39,978] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | exclude.internal.topics = true policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0105240851090900u 1 2024-05-01 08:51:14 kafka | [2024-05-01 08:51:39,978] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | fetch.max.bytes = 52428800 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0105240851090900u 1 2024-05-01 08:51:14 kafka | [2024-05-01 08:51:39,978] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, 
brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | fetch.max.wait.ms = 500 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0105240851090900u 1 2024-05-01 08:51:14 kafka | [2024-05-01 08:51:39,978] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | fetch.min.bytes = 1 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0105240851090900u 1 2024-05-01 08:51:14 kafka | [2024-05-01 08:51:39,979] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | group.id = e55cdecf-bd7f-4245-8ff0-8ac852d4496f policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0105240851090900u 1 2024-05-01 08:51:14 kafka | [2024-05-01 08:51:39,979] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | group.instance.id = null policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0105240851090900u 1 2024-05-01 08:51:14 kafka | [2024-05-01 08:51:39,979] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) policy-pap | heartbeat.interval.ms = 3000 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0105240851091000u 1 2024-05-01 08:51:14 policy-pap | interceptor.classes = [] kafka | [2024-05-01 08:51:39,979] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0105240851091000u 1 2024-05-01 08:51:14 policy-pap | internal.leave.group.on.close = true kafka | [2024-05-01 08:51:39,979] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0105240851091000u 1 2024-05-01 08:51:14 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false kafka | [2024-05-01 08:51:39,979] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0105240851091000u 1 2024-05-01 08:51:14 policy-pap | isolation.level = read_uncommitted kafka | [2024-05-01 08:51:39,979] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, 
partitionEpoch=0) (state.change.logger)
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0105240851091000u 1 2024-05-01 08:51:14
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0105240851091000u 1 2024-05-01 08:51:14
kafka | [2024-05-01 08:51:39,979] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | max.partition.fetch.bytes = 1048576
kafka | [2024-05-01 08:51:39,979] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0105240851091000u 1 2024-05-01 08:51:14
policy-pap | max.poll.interval.ms = 300000
kafka | [2024-05-01 08:51:39,979] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0105240851091000u 1 2024-05-01 08:51:14
policy-pap | max.poll.records = 500
kafka | [2024-05-01 08:51:39,980] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0105240851091000u 1 2024-05-01 08:51:14
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-05-01 08:51:39,980] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0105240851091100u 1 2024-05-01 08:51:15
policy-pap | metric.reporters = []
kafka | [2024-05-01 08:51:39,980] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0105240851091200u 1 2024-05-01 08:51:15
policy-pap | metrics.num.samples = 2
kafka | [2024-05-01 08:51:39,980] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0105240851091200u 1 2024-05-01 08:51:15
policy-pap | metrics.recording.level = INFO
kafka | [2024-05-01 08:51:39,980] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0105240851091200u 1 2024-05-01 08:51:15
policy-pap | metrics.sample.window.ms = 30000
kafka | [2024-05-01 08:51:39,980] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0105240851091200u 1 2024-05-01 08:51:15
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
kafka | [2024-05-01 08:51:39,980] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0105240851091300u 1 2024-05-01 08:51:15
policy-pap | receive.buffer.bytes = 65536
kafka | [2024-05-01 08:51:39,980] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0105240851091300u 1 2024-05-01 08:51:15
policy-pap | reconnect.backoff.max.ms = 1000
kafka | [2024-05-01 08:51:39,980] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0105240851091300u 1 2024-05-01 08:51:15
policy-pap | reconnect.backoff.ms = 50
kafka | [2024-05-01 08:51:39,980] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | policyadmin: OK @ 1300
policy-pap | request.timeout.ms = 30000
kafka | [2024-05-01 08:51:39,980] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | retry.backoff.ms = 100
kafka | [2024-05-01 08:51:39,981] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.client.callback.handler.class = null
kafka | [2024-05-01 08:51:39,981] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.jaas.config = null
kafka | [2024-05-01 08:51:39,981] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | [2024-05-01 08:51:39,981] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
kafka | [2024-05-01 08:51:39,981] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.kerberos.service.name = null
kafka | [2024-05-01 08:51:39,981] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | [2024-05-01 08:51:39,981] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.login.callback.handler.class = null
kafka | [2024-05-01 08:51:39,981] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.login.class = null
kafka | [2024-05-01 08:51:39,981] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.login.connect.timeout.ms = null
kafka | [2024-05-01 08:51:39,981] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.login.read.timeout.ms = null
kafka | [2024-05-01 08:51:39,983] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | [2024-05-01 08:51:39,983] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | [2024-05-01 08:51:39,983] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | [2024-05-01 08:51:39,984] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | [2024-05-01 08:51:39,984] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | [2024-05-01 08:51:39,984] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | [2024-05-01 08:51:39,984] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
policy-pap | sasl.mechanism = GSSAPI
kafka | [2024-05-01 08:51:39,984] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-05-01 08:51:39,984] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | [2024-05-01 08:51:39,984] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | [2024-05-01 08:51:39,984] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | [2024-05-01 08:51:39,984] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | [2024-05-01 08:51:39,984] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | [2024-05-01 08:51:39,984] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
kafka | [2024-05-01 08:51:39,984] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
kafka | [2024-05-01 08:51:39,984] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | [2024-05-01 08:51:39,984] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | [2024-05-01 08:51:39,984] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
policy-pap | security.protocol = PLAINTEXT
kafka | [2024-05-01 08:51:39,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
policy-pap | security.providers = null
kafka | [2024-05-01 08:51:39,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
policy-pap | send.buffer.bytes = 131072
kafka | [2024-05-01 08:51:39,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
policy-pap | session.timeout.ms = 45000
kafka | [2024-05-01 08:51:39,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | [2024-05-01 08:51:39,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | [2024-05-01 08:51:39,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
policy-pap | ssl.cipher.suites = null
kafka | [2024-05-01 08:51:39,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-05-01 08:51:39,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-05-01 08:51:39,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
policy-pap | ssl.engine.factory.class = null
kafka | [2024-05-01 08:51:39,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
policy-pap | ssl.key.password = null
kafka | [2024-05-01 08:51:39,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-05-01 08:51:39,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-05-01 08:51:39,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
policy-pap | ssl.keystore.key = null
kafka | [2024-05-01 08:51:39,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
policy-pap | ssl.keystore.location = null
kafka | [2024-05-01 08:51:39,985] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
policy-pap | ssl.keystore.password = null
kafka | [2024-05-01 08:51:39,986] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
policy-pap | ssl.keystore.type = JKS
kafka | [2024-05-01 08:51:39,986] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-05-01 08:51:39,986] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
kafka | [2024-05-01 08:51:39,986] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-05-01 08:51:39,986] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
policy-pap | ssl.truststore.certificates = null
kafka | [2024-05-01 08:51:39,986] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
policy-pap | ssl.truststore.location = null
kafka | [2024-05-01 08:51:39,986] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
policy-pap | ssl.truststore.password = null
kafka | [2024-05-01 08:51:39,986] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
policy-pap | ssl.truststore.type = JKS
kafka | [2024-05-01 08:51:39,986] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-05-01 08:51:39,986] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
policy-pap |
kafka | [2024-05-01 08:51:39,986] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
policy-pap | [2024-05-01T08:51:39.299+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | [2024-05-01 08:51:39,986] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
policy-pap | [2024-05-01T08:51:39.299+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | [2024-05-01 08:51:39,986] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
policy-pap | [2024-05-01T08:51:39.299+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714553499299
kafka | [2024-05-01 08:51:39,987] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
policy-pap | [2024-05-01T08:51:39.299+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Subscribed to topic(s): policy-pdp-pap
kafka | [2024-05-01 08:51:39,987] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
policy-pap | [2024-05-01T08:51:39.300+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
kafka | [2024-05-01 08:51:39,987] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
policy-pap | [2024-05-01T08:51:39.300+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=887a3d21-9f05-4992-86d2-85176763dfe7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1cb929a9
kafka | [2024-05-01 08:51:39,987] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
policy-pap | [2024-05-01T08:51:39.300+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=887a3d21-9f05-4992-86d2-85176763dfe7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | [2024-05-01 08:51:39,987] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
policy-pap | [2024-05-01T08:51:39.301+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | [2024-05-01 08:51:39,988] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
policy-pap | allow.auto.create.topics = true
kafka | [2024-05-01 08:51:39,997] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
policy-pap | auto.commit.interval.ms = 5000
kafka | [2024-05-01 08:51:39,999] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-05-01 08:51:39,999] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | auto.offset.reset = latest
kafka | [2024-05-01 08:51:40,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-05-01 08:51:40,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | check.crcs = true
kafka | [2024-05-01 08:51:40,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-05-01 08:51:40,003] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | client.id = consumer-policy-pap-4
kafka | [2024-05-01 08:51:40,004] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | client.rack =
kafka | [2024-05-01 08:51:40,004] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | connections.max.idle.ms = 540000
policy-pap | default.api.timeout.ms = 60000
kafka | [2024-05-01 08:51:40,004] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | enable.auto.commit = true
kafka | [2024-05-01 08:51:40,004] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | exclude.internal.topics = true
kafka | [2024-05-01 08:51:40,004] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | fetch.max.bytes = 52428800
kafka | [2024-05-01 08:51:40,004] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | fetch.max.wait.ms = 500
kafka | [2024-05-01 08:51:40,004] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | fetch.min.bytes = 1
kafka | [2024-05-01 08:51:40,005] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | group.id = policy-pap
kafka | [2024-05-01 08:51:40,005] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | group.instance.id = null
kafka | [2024-05-01 08:51:40,005] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | heartbeat.interval.ms = 3000
kafka | [2024-05-01 08:51:40,005] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | interceptor.classes = []
kafka | [2024-05-01 08:51:40,005] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | internal.leave.group.on.close = true
kafka | [2024-05-01 08:51:40,005] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
kafka | [2024-05-01 08:51:40,005] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | isolation.level = read_uncommitted
kafka | [2024-05-01 08:51:40,005] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-05-01 08:51:40,006] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | max.partition.fetch.bytes = 1048576
kafka | [2024-05-01 08:51:40,012] TRACE
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) policy-pap | max.poll.interval.ms = 300000 kafka | [2024-05-01 08:51:40,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) policy-pap | max.poll.records = 500 kafka | [2024-05-01 08:51:40,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) policy-pap | metadata.max.age.ms = 300000 kafka | [2024-05-01 08:51:40,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) policy-pap | metric.reporters = [] kafka | [2024-05-01 08:51:40,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) policy-pap | metrics.num.samples = 2 kafka | [2024-05-01 08:51:40,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) policy-pap | metrics.recording.level = INFO kafka | [2024-05-01 08:51:40,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 kafka | [2024-05-01 08:51:40,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] kafka | [2024-05-01 08:51:40,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) policy-pap | receive.buffer.bytes = 65536 kafka | [2024-05-01 08:51:40,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-05-01 08:51:40,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) policy-pap | reconnect.backoff.ms = 50 kafka | [2024-05-01 08:51:40,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) policy-pap | request.timeout.ms = 30000 kafka | [2024-05-01 08:51:40,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) policy-pap | retry.backoff.ms = 100 kafka | [2024-05-01 08:51:40,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.client.callback.handler.class = null kafka | [2024-05-01 08:51:40,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.jaas.config = null kafka | [2024-05-01 08:51:40,013] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-05-01 08:51:40,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-05-01 08:51:40,014] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.kerberos.service.name = null kafka | [2024-05-01 08:51:40,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-05-01 08:51:40,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-05-01 08:51:40,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.callback.handler.class = null kafka | [2024-05-01 08:51:40,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.class = null kafka | [2024-05-01 08:51:40,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-05-01 08:51:40,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-05-01 08:51:40,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-05-01 08:51:40,014] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica 
(state.change.logger) policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-05-01 08:51:40,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-05-01 08:51:40,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-05-01 08:51:40,015] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-05-01 08:51:40,015] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-pap | sasl.mechanism = GSSAPI kafka | [2024-05-01 08:51:40,019] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.expected.audience = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) 
correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | security.protocol = PLAINTEXT kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | security.providers = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | send.buffer.bytes = 131072 kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | session.timeout.ms = 45000 kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | socket.connection.setup.timeout.max.ms = 30000 kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.cipher.suites = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.endpoint.identification.algorithm = https kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.engine.factory.class = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 
1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keystore.key = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keystore.location = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keystore.password = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keystore.type = JKS kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.provider = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.secure.random.implementation = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.trustmanager.algorithm = PKIX 
kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.truststore.certificates = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.truststore.location = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.truststore.password = null kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.truststore.type = JKS kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-05-01T08:51:39.306+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-05-01T08:51:39.307+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | 
[2024-05-01T08:51:39.307+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714553499306 kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-05-01T08:51:39.307+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-05-01T08:51:39.308+00:00|INFO|ServiceManager|main] Policy PAP starting topics kafka | [2024-05-01 08:51:40,020] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-01 08:51:40,021] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-01 08:51:40,021] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-01 08:51:40,021] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-01 08:51:40,021] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-01 08:51:40,021] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-01 08:51:40,021] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-01 08:51:40,021] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-01 08:51:40,021] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-01 08:51:40,021] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-01 08:51:40,021] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-05-01 08:51:40,057] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-05-01 08:51:40,058] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-05-01 08:51:40,058] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader 
transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-05-01 08:51:40,058] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-05-01 08:51:40,058] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-05-01 08:51:40,058] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-05-01 08:51:40,058] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-05-01 08:51:40,058] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-05-01 08:51:40,058] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | [2024-05-01 08:51:40,058] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-05-01 08:51:40,058] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-05-01 08:51:40,058] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader 
transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-05-01 08:51:40,058] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-05-01 08:51:40,059] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-05-01 08:51:40,059] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-05-01 08:51:40,059] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-05-01 08:51:40,059] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-05-01 08:51:40,059] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) policy-pap | [2024-05-01T08:51:39.308+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=887a3d21-9f05-4992-86d2-85176763dfe7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase 
[servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-05-01T08:51:39.308+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=e55cdecf-bd7f-4245-8ff0-8ac852d4496f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-05-01T08:51:39.308+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=eb817357-d9e7-4ef6-b779-8e01114be173, alive=false, publisher=null]]: starting policy-pap | [2024-05-01T08:51:39.327+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-1 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms 
= 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | 
sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-05-01T08:51:39.338+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
policy-pap | [2024-05-01T08:51:39.353+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-05-01T08:51:39.353+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-05-01T08:51:39.353+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714553499353 policy-pap | [2024-05-01T08:51:39.353+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=eb817357-d9e7-4ef6-b779-8e01114be173, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-05-01T08:51:39.353+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=00e62b8b-029c-47d8-80c8-2c8d26f30705, alive=false, publisher=null]]: starting policy-pap | [2024-05-01T08:51:39.354+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 kafka | [2024-05-01 08:51:40,059] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-05-01 08:51:40,059] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-05-01 08:51:40,059] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-05-01 08:51:40,059] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-05-01 08:51:40,059] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition 
for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-05-01 08:51:40,059] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-05-01 08:51:40,059] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-05-01 08:51:40,059] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-05-01 08:51:40,060] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-05-01 08:51:40,060] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-05-01 08:51:40,060] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-05-01 08:51:40,060] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-05-01 08:51:40,060] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-05-01 08:51:40,060] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition 
for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-05-01 08:51:40,060] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-05-01 08:51:40,060] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-05-01 08:51:40,060] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-05-01 08:51:40,060] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-05-01 08:51:40,061] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-05-01 08:51:40,061] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-05-01 08:51:40,061] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-05-01 08:51:40,061] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-05-01 08:51:40,061] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for 
partition __consumer_offsets-5 (state.change.logger) kafka | [2024-05-01 08:51:40,061] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-05-01 08:51:40,061] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-05-01 08:51:40,061] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-05-01 08:51:40,061] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-05-01 08:51:40,061] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-05-01 08:51:40,061] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-05-01 08:51:40,061] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-05-01 08:51:40,061] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-05-01 08:51:40,062] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for 
partition __consumer_offsets-13 (state.change.logger) kafka | [2024-05-01 08:51:40,062] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-05-01 08:51:40,063] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) kafka | [2024-05-01 08:51:40,063] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) kafka | [2024-05-01 08:51:40,144] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 
(kafka.log.UnifiedLog$) kafka | [2024-05-01 08:51:40,155] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-01 08:51:40,157] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) kafka | [2024-05-01 08:51:40,158] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-01 08:51:40,160] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-05-01 08:51:40,179] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-01 08:51:40,180] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-01 08:51:40,180] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) kafka | [2024-05-01 08:51:40,180] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-01 08:51:40,180] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR 
[1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) kafka | [2024-05-01 08:51:40,195] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-01 08:51:40,195] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-01 08:51:40,195] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) kafka | [2024-05-01 08:51:40,196] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-01 08:51:40,196] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-01 08:51:40,216] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-01 08:51:40,216] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-01 08:51:40,217] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) kafka | [2024-05-01 08:51:40,217] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-01 08:51:40,217] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-01 08:51:40,233] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-01 08:51:40,234] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-01 08:51:40,234] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2024-05-01 08:51:40,234] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-01 08:51:40,234] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-01 08:51:40,282] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-05-01 08:51:40,282] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-05-01 08:51:40,283] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) kafka | [2024-05-01 08:51:40,283] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-01 08:51:40,283] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-05-01 08:51:40,293] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partitioner.adaptive.partitioning.enable = true policy-pap | partitioner.availability.timeout.ms = 0 policy-pap | partitioner.class = null policy-pap | partitioner.ignore.keys = false policy-pap | receive.buffer.bytes = 32768 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retries = 2147483647 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | 
sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null policy-pap | ssl.key.password = null policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.key = null policy-pap | ssl.keystore.location = null policy-pap | ssl.keystore.password = null policy-pap | ssl.keystore.type = JKS policy-pap | ssl.protocol = TLSv1.3 policy-pap | ssl.provider = null policy-pap | ssl.secure.random.implementation = null policy-pap | 
ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.certificates = null policy-pap | ssl.truststore.location = null policy-pap | ssl.truststore.password = null policy-pap | ssl.truststore.type = JKS policy-pap | transaction.timeout.ms = 60000 policy-pap | transactional.id = null policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | policy-pap | [2024-05-01T08:51:39.354+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. policy-pap | [2024-05-01T08:51:39.357+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-pap | [2024-05-01T08:51:39.357+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-pap | [2024-05-01T08:51:39.357+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714553499357 policy-pap | [2024-05-01T08:51:39.358+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=00e62b8b-029c-47d8-80c8-2c8d26f30705, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-05-01T08:51:39.358+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-pap | [2024-05-01T08:51:39.358+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher policy-pap | [2024-05-01T08:51:39.363+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher policy-pap | [2024-05-01T08:51:39.364+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers policy-pap | [2024-05-01T08:51:39.369+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers policy-pap | [2024-05-01T08:51:39.369+00:00|INFO|TimerManager|Thread-9] timer manager update started policy-pap | [2024-05-01T08:51:39.369+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock policy-pap | [2024-05-01T08:51:39.369+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 
policy-pap | [2024-05-01T08:51:39.370+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
kafka | [2024-05-01 08:51:40,295] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:51:39.371+00:00|INFO|ServiceManager|main] Policy PAP started
kafka | [2024-05-01 08:51:40,295] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:39.371+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
kafka | [2024-05-01 08:51:40,295] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:39.384+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.269 seconds (process running for 9.886)
kafka | [2024-05-01 08:51:40,295] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:51:39.746+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: sZdrrRZqSOecyf1-XTESVg
kafka | [2024-05-01 08:51:40,303] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:51:39.748+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | [2024-05-01 08:51:40,304] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:51:39.748+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: sZdrrRZqSOecyf1-XTESVg
kafka | [2024-05-01 08:51:40,304] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:39.749+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: sZdrrRZqSOecyf1-XTESVg
kafka | [2024-05-01 08:51:40,304] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:39.796+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,304] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:51:39.796+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Cluster ID: sZdrrRZqSOecyf1-XTESVg
kafka | [2024-05-01 08:51:40,310] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:51:39.866+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0
kafka | [2024-05-01 08:51:40,310] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:51:39.866+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0
kafka | [2024-05-01 08:51:40,310] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:39.882+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,310] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:39.940+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,310] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:51:40.003+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,322] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:51:40.060+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,324] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:51:40.116+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,324] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,324] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:40.166+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,324] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:51:40.230+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,333] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:51:40.272+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,334] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:51:40.337+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,334] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:40.379+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,334] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:40.451+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,334] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:51:40.485+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,344] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:51:40.556+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,345] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:51:40.589+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,345] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:40.660+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,345] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:40.696+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-05-01 08:51:40,345] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:51:40.770+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
kafka | [2024-05-01 08:51:40,354] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:51:40.776+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
kafka | [2024-05-01 08:51:40,354] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:51:40.801+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
kafka | [2024-05-01 08:51:40,355] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:40.802+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] (Re-)joining group
kafka | [2024-05-01 08:51:40,355] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:40.827+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Request joining group due to: need to re-join with the given member-id: consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3-b7d212f5-01d6-47d7-aed1-e1b63c70da80
kafka | [2024-05-01 08:51:40,355] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:51:40.827+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-19ea082b-9ba1-4d70-9565-10711f7484b9
kafka | [2024-05-01 08:51:40,361] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:51:40.827+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-pap | [2024-05-01T08:51:40.827+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] (Re-)joining group
kafka | [2024-05-01 08:51:40,362] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:51:40.827+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
kafka | [2024-05-01 08:51:40,362] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:40.827+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
kafka | [2024-05-01 08:51:40,362] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:41.593+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet'
kafka | [2024-05-01 08:51:40,362] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:51:41.593+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet'
kafka | [2024-05-01 08:51:40,368] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:51:41.596+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 3 ms
kafka | [2024-05-01 08:51:40,369] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:51:43.852+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Successfully joined group with generation Generation{generationId=1, memberId='consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3-b7d212f5-01d6-47d7-aed1-e1b63c70da80', protocol='range'}
kafka | [2024-05-01 08:51:40,369] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:43.853+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-19ea082b-9ba1-4d70-9565-10711f7484b9', protocol='range'}
kafka | [2024-05-01 08:51:40,369] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:43.870+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Finished assignment for group at generation 1: {consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3-b7d212f5-01d6-47d7-aed1-e1b63c70da80=Assignment(partitions=[policy-pdp-pap-0])}
kafka | [2024-05-01 08:51:40,369] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:51:43.870+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-19ea082b-9ba1-4d70-9565-10711f7484b9=Assignment(partitions=[policy-pdp-pap-0])}
kafka | [2024-05-01 08:51:40,376] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:51:43.897+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Successfully synced group in generation Generation{generationId=1, memberId='consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3-b7d212f5-01d6-47d7-aed1-e1b63c70da80', protocol='range'}
kafka | [2024-05-01 08:51:40,377] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:51:43.898+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
kafka | [2024-05-01 08:51:40,377] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:43.899+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-19ea082b-9ba1-4d70-9565-10711f7484b9', protocol='range'}
kafka | [2024-05-01 08:51:40,377] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:43.899+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
kafka | [2024-05-01 08:51:40,377] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:51:43.904+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
kafka | [2024-05-01 08:51:40,390] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:51:43.904+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Adding newly assigned partitions: policy-pdp-pap-0
kafka | [2024-05-01 08:51:40,390] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:51:43.941+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
kafka | [2024-05-01 08:51:40,390] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:43.941+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Found no committed offset for partition policy-pdp-pap-0
kafka | [2024-05-01 08:51:40,390] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:51:43.971+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3, groupId=e55cdecf-bd7f-4245-8ff0-8ac852d4496f] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
kafka | [2024-05-01 08:51:40,391] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:51:43.971+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
kafka | [2024-05-01 08:51:40,397] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:52:01.220+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
kafka | [2024-05-01 08:51:40,398] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | []
kafka | [2024-05-01 08:51:40,398] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:01.221+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-05-01 08:51:40,398] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"1e6bf0b7-d9d0-4154-a5cb-b1f23bc870f0","timestampMs":1714553521189,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup"}
kafka | [2024-05-01 08:51:40,398] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:52:01.221+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-05-01 08:51:40,405] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"1e6bf0b7-d9d0-4154-a5cb-b1f23bc870f0","timestampMs":1714553521189,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup"}
kafka | [2024-05-01 08:51:40,405] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:52:01.230+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
kafka | [2024-05-01 08:51:40,405] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:01.310+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate starting
kafka | [2024-05-01 08:51:40,406] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:01.310+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate starting listener
kafka | [2024-05-01 08:51:40,406] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:52:01.311+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate starting timer
kafka | [2024-05-01 08:51:40,412] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:52:01.311+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=66651ea1-ddd7-4463-a227-44f2c500a1c1, expireMs=1714553551311]
kafka | [2024-05-01 08:51:40,413] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:52:01.313+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate starting enqueue
kafka | [2024-05-01 08:51:40,413] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:01.313+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=66651ea1-ddd7-4463-a227-44f2c500a1c1, expireMs=1714553551311]
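The "update timer waiting 29998ms" figure is simply the registered `expireMs` minus the epoch time at which the wait begins; the timer was registered at 08:52:01.311 with a 30-second expiry and the wait was computed two milliseconds later. A one-line sketch of the arithmetic (the helper name is illustrative; the millisecond values come from the log entries above):

```python
def remaining_wait_ms(expire_ms: int, now_ms: int) -> int:
    """Milliseconds left before a registered timer expires (never negative)."""
    return max(0, expire_ms - now_ms)

# expireMs=1714553551311 from the registered Timer; the wait was logged at
# epoch millisecond 1714553521313, i.e. 29998 ms before expiry.
```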
kafka | [2024-05-01 08:51:40,413] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:01.313+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate started
kafka | [2024-05-01 08:51:40,413] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:52:01.315+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
kafka | [2024-05-01 08:51:40,419] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | {"source":"pap-b080b72b-cd54-41ec-8ce5-cde21d44cf94","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"66651ea1-ddd7-4463-a227-44f2c500a1c1","timestampMs":1714553521291,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-05-01 08:51:40,420] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:52:01.347+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-05-01 08:51:40,420] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
policy-pap | {"source":"pap-b080b72b-cd54-41ec-8ce5-cde21d44cf94","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"66651ea1-ddd7-4463-a227-44f2c500a1c1","timestampMs":1714553521291,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-05-01 08:51:40,420] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:01.348+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
kafka | [2024-05-01 08:51:40,420] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:52:01.352+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-05-01 08:51:40,427] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | {"source":"pap-b080b72b-cd54-41ec-8ce5-cde21d44cf94","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"66651ea1-ddd7-4463-a227-44f2c500a1c1","timestampMs":1714553521291,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-05-01 08:51:40,427] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:52:01.352+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
kafka | [2024-05-01 08:51:40,427] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:01.375+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-05-01 08:51:40,427] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"66651ea1-ddd7-4463-a227-44f2c500a1c1","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"150c59f6-5f2c-4f25-93e4-5fc7ab96dd2c","timestampMs":1714553521363,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-05-01 08:51:40,428] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) policy-pap | [2024-05-01T08:52:01.376+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 66651ea1-ddd7-4463-a227-44f2c500a1c1 kafka | [2024-05-01 08:51:40,433] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-05-01T08:52:01.376+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-05-01 08:51:40,433] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5bb21b1b-08c0-4d8a-9edd-eda3f6d9598b","timestampMs":1714553521362,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup"} kafka | [2024-05-01 08:51:40,433] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.375+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-05-01 08:51:40,433] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"66651ea1-ddd7-4463-a227-44f2c500a1c1","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"150c59f6-5f2c-4f25-93e4-5fc7ab96dd2c","timestampMs":1714553521363,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 
kafka | [2024-05-01 08:51:40,434] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-05-01T08:52:01.377+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate stopping kafka | [2024-05-01 08:51:40,443] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-05-01T08:52:01.378+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate stopping enqueue kafka | [2024-05-01 08:51:40,444] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-05-01T08:52:01.378+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate stopping timer kafka | [2024-05-01 08:51:40,444] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.378+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=66651ea1-ddd7-4463-a227-44f2c500a1c1, expireMs=1714553551311] kafka | [2024-05-01 08:51:40,444] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.378+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate stopping listener kafka | 
[2024-05-01 08:51:40,445] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-05-01T08:52:01.378+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate stopped kafka | [2024-05-01 08:51:40,453] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-05-01T08:52:01.383+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate successful kafka | [2024-05-01 08:51:40,454] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-05-01T08:52:01.383+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a start publishing next request kafka | [2024-05-01 08:51:40,454] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpStateChange starting kafka | [2024-05-01 08:51:40,455] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.383+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpStateChange starting listener kafka | [2024-05-01 08:51:40,455] INFO [Broker id=1] 
Leader __consumer_offsets-32 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-05-01T08:52:01.384+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpStateChange starting timer kafka | [2024-05-01 08:51:40,465] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-05-01T08:52:01.384+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=f00943fb-d546-4b19-b00e-701bffd55885, expireMs=1714553551384] kafka | [2024-05-01 08:51:40,465] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-05-01T08:52:01.384+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=f00943fb-d546-4b19-b00e-701bffd55885, expireMs=1714553551384] kafka | [2024-05-01 08:51:40,465] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.384+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpStateChange starting enqueue kafka | [2024-05-01 08:51:40,465] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.384+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpStateChange started kafka | [2024-05-01 
08:51:40,466] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-05-01T08:52:01.385+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-05-01 08:51:40,475] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | {"source":"pap-b080b72b-cd54-41ec-8ce5-cde21d44cf94","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f00943fb-d546-4b19-b00e-701bffd55885","timestampMs":1714553521292,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-05-01 08:51:40,476] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-05-01T08:52:01.416+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-05-01 08:51:40,477] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5bb21b1b-08c0-4d8a-9edd-eda3f6d9598b","timestampMs":1714553521362,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup"} kafka | [2024-05-01 08:51:40,477] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | 
[2024-05-01T08:52:01.416+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus kafka | [2024-05-01 08:51:40,477] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-05-01T08:52:01.420+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-b080b72b-cd54-41ec-8ce5-cde21d44cf94","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f00943fb-d546-4b19-b00e-701bffd55885","timestampMs":1714553521292,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-05-01T08:52:01.421+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE kafka | [2024-05-01 08:51:40,487] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-05-01T08:52:01.421+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-05-01 08:51:40,489] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f00943fb-d546-4b19-b00e-701bffd55885","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"5bd08c8b-c07d-4d31-a700-f618654e9884","timestampMs":1714553521397,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-05-01 08:51:40,489] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.431+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpStateChange stopping kafka | [2024-05-01 08:51:40,489] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.431+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpStateChange stopping enqueue kafka | [2024-05-01 08:51:40,489] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-05-01T08:52:01.431+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpStateChange stopping timer kafka | [2024-05-01 08:51:40,498] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-05-01T08:52:01.431+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=f00943fb-d546-4b19-b00e-701bffd55885, expireMs=1714553551384] kafka | [2024-05-01 08:51:40,499] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-05-01T08:52:01.431+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpStateChange stopping listener kafka | [2024-05-01 08:51:40,499] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.431+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpStateChange stopped kafka | [2024-05-01 08:51:40,499] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.431+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpStateChange successful kafka | [2024-05-01 08:51:40,499] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-05-01T08:52:01.432+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a start publishing next request kafka | [2024-05-01 08:51:40,509] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-05-01T08:52:01.432+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate starting kafka | [2024-05-01 08:51:40,509] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-05-01T08:52:01.432+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate starting listener kafka | [2024-05-01 08:51:40,510] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.432+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate starting timer kafka | [2024-05-01 08:51:40,510] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.432+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=63129a6a-b8cb-4f5a-a61a-dec570283fbe, expireMs=1714553551432] kafka | [2024-05-01 08:51:40,511] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-05-01T08:52:01.432+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate starting enqueue kafka | [2024-05-01 08:51:40,518] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-05-01T08:52:01.432+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate started kafka | [2024-05-01 08:51:40,518] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-05-01T08:52:01.432+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-05-01 08:51:40,518] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) policy-pap | {"source":"pap-b080b72b-cd54-41ec-8ce5-cde21d44cf94","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63129a6a-b8cb-4f5a-a61a-dec570283fbe","timestampMs":1714553521407,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-05-01 08:51:40,518] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.435+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-05-01 08:51:40,519] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | {"source":"pap-b080b72b-cd54-41ec-8ce5-cde21d44cf94","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f00943fb-d546-4b19-b00e-701bffd55885","timestampMs":1714553521292,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-05-01 08:51:40,528] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-05-01T08:52:01.435+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE kafka | [2024-05-01 08:51:40,529] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-05-01T08:52:01.442+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-05-01 08:51:40,529] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) policy-pap | {"source":"pap-b080b72b-cd54-41ec-8ce5-cde21d44cf94","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63129a6a-b8cb-4f5a-a61a-dec570283fbe","timestampMs":1714553521407,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-05-01 08:51:40,530] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-05-01 08:51:40,530] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high 
watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-pap | [2024-05-01T08:52:01.442+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE kafka | [2024-05-01 08:51:40,538] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-05-01T08:52:01.445+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-05-01 08:51:40,539] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f00943fb-d546-4b19-b00e-701bffd55885","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"5bd08c8b-c07d-4d31-a700-f618654e9884","timestampMs":1714553521397,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-05-01 08:51:40,539] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.445+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f00943fb-d546-4b19-b00e-701bffd55885 kafka | [2024-05-01 08:51:40,539] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-05-01T08:52:01.451+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-05-01 08:51:40,539] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | {"source":"pap-b080b72b-cd54-41ec-8ce5-cde21d44cf94","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"63129a6a-b8cb-4f5a-a61a-dec570283fbe","timestampMs":1714553521407,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-05-01 08:51:40,546] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-05-01T08:52:01.451+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE kafka | [2024-05-01 08:51:40,546] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-05-01T08:52:01.455+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-05-01 08:51:40,546] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63129a6a-b8cb-4f5a-a61a-dec570283fbe","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"13ab49de-b242-4612-9806-a95d6c5d01aa","timestampMs":1714553521445,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-05-01 08:51:40,546] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | 
[2024-05-01T08:52:01.455+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 63129a6a-b8cb-4f5a-a61a-dec570283fbe
kafka | [2024-05-01 08:51:40,546] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:52:01.457+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"63129a6a-b8cb-4f5a-a61a-dec570283fbe","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"13ab49de-b242-4612-9806-a95d6c5d01aa","timestampMs":1714553521445,"name":"apex-d62bfb61-d94e-474e-a74e-302109ffaa0a","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-05-01T08:52:01.458+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate stopping
kafka | [2024-05-01 08:51:40,554] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:52:01.458+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate stopping enqueue
kafka | [2024-05-01 08:51:40,555] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:52:01.458+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate stopping timer
kafka | [2024-05-01 08:51:40,555] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:01.458+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=63129a6a-b8cb-4f5a-a61a-dec570283fbe, expireMs=1714553551432]
kafka | [2024-05-01 08:51:40,555] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:01.458+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate stopping listener
kafka | [2024-05-01 08:51:40,555] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(ctm0k7NMTIu_tGFXft5nrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:52:01.458+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate stopped
kafka | [2024-05-01 08:51:40,564] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:52:01.462+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a PdpUpdate successful
kafka | [2024-05-01 08:51:40,565] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:52:01.462+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-d62bfb61-d94e-474e-a74e-302109ffaa0a has no more requests
kafka | [2024-05-01 08:51:40,565] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:05.346+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client.
kafka | [2024-05-01 08:51:40,565] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:05.391+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
kafka | [2024-05-01 08:51:40,565] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:52:05.398+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
kafka | [2024-05-01 08:51:40,572] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:52:05.402+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
kafka | [2024-05-01 08:51:40,572] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:52:05.813+00:00|INFO|SessionData|http-nio-6969-exec-8] unknown group testGroup
kafka | [2024-05-01 08:51:40,572] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:06.313+00:00|INFO|SessionData|http-nio-6969-exec-8] create cached group testGroup
kafka | [2024-05-01 08:51:40,573] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:06.314+00:00|INFO|SessionData|http-nio-6969-exec-8] creating DB group testGroup
kafka | [2024-05-01 08:51:40,573] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:52:06.825+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group testGroup
kafka | [2024-05-01 08:51:40,580] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:52:07.017+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering a deploy for policy onap.restart.tca 1.0.0
kafka | [2024-05-01 08:51:40,580] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:52:07.095+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
kafka | [2024-05-01 08:51:40,580] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:07.096+00:00|INFO|SessionData|http-nio-6969-exec-2] update cached group testGroup
kafka | [2024-05-01 08:51:40,580] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:07.096+00:00|INFO|SessionData|http-nio-6969-exec-2] updating DB group testGroup
kafka | [2024-05-01 08:51:40,581] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:52:07.240+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-05-01T08:52:07Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-05-01T08:52:07Z, user=policyadmin)]
kafka | [2024-05-01 08:51:40,588] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:52:07.879+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup
kafka | [2024-05-01 08:51:40,589] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:52:07.880+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
kafka | [2024-05-01 08:51:40,589] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:07.880+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy onap.restart.tca 1.0.0
kafka | [2024-05-01 08:51:40,589] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:07.880+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup
kafka | [2024-05-01 08:51:40,589] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:52:07.881+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup
kafka | [2024-05-01 08:51:40,595] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:52:07.890+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-05-01T08:52:07Z, user=policyadmin)]
kafka | [2024-05-01 08:51:40,596] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:52:08.268+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup
kafka | [2024-05-01 08:51:40,596] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:08.268+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
kafka | [2024-05-01 08:51:40,596] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:08.268+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
kafka | [2024-05-01 08:51:40,596] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:52:08.268+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
kafka | [2024-05-01 08:51:40,607] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-05-01T08:52:08.268+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
kafka | [2024-05-01 08:51:40,607] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-05-01T08:52:08.268+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
kafka | [2024-05-01 08:51:40,607] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
policy-pap | [2024-05-01T08:52:08.279+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-05-01T08:52:08Z, user=policyadmin)]
kafka | [2024-05-01 08:51:40,607] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,607] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-05-01T08:52:28.849+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group testGroup
policy-pap | [2024-05-01T08:52:28.851+00:00|INFO|SessionData|http-nio-6969-exec-2] deleting DB group testGroup
policy-pap | [2024-05-01T08:52:31.311+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=66651ea1-ddd7-4463-a227-44f2c500a1c1, expireMs=1714553551311]
policy-pap | [2024-05-01T08:52:31.384+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=f00943fb-d546-4b19-b00e-701bffd55885, expireMs=1714553551384]
kafka | [2024-05-01 08:51:40,616] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-01 08:51:40,617] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-01 08:51:40,617] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,617] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,617] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and
removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-01 08:51:40,625] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-01 08:51:40,625] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-01 08:51:40,625] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,625] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,626] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-01 08:51:40,633] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-01 08:51:40,633] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-01 08:51:40,634] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,634] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,634] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-01 08:51:40,639] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-01 08:51:40,640] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-01 08:51:40,640] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,640] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,640] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-01 08:51:40,646] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-01 08:51:40,646] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-01 08:51:40,646] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,646] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,646] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-01 08:51:40,654] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-01 08:51:40,655] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-01 08:51:40,655] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,655] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,655] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-01 08:51:40,663] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-01 08:51:40,664] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-01 08:51:40,664] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,665] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,665] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-01 08:51:40,673] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-01 08:51:40,674] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-01 08:51:40,674] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,674] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,674] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-01 08:51:40,683] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-01 08:51:40,683] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-01 08:51:40,683] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,683] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,683] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-05-01 08:51:40,689] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-05-01 08:51:40,689] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-05-01 08:51:40,689] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,689] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-05-01 08:51:40,689] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(JcqNatGCTIqk2TVHn8pksg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-05-01 08:51:40,697] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2024-05-01 08:51:40,698] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2024-05-01 08:51:40,698] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2024-05-01 08:51:40,698] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2024-05-01 08:51:40,698] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2024-05-01 08:51:40,698] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-05-01 08:51:40,698] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2024-05-01 08:51:40,698] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-05-01 08:51:40,698] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-05-01 08:51:40,698] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-05-01 08:51:40,698] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-05-01 08:51:40,698] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-05-01 08:51:40,698] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-05-01 08:51:40,706] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-01 08:51:40,707] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1]
Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 
kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading 
of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,709] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,709] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 
08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of 
offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 
08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,710] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,710] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,713] INFO [Broker id=1] Finished LeaderAndIsr request in 709ms correlationId 1 from controller 1 for 51 partitions 
(state.change.logger) kafka | [2024-05-01 08:51:40,714] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,716] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,717] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,717] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,717] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,717] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,717] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,717] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,717] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,717] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,717] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,718] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,718] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,719] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,719] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,719] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,719] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,719] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,719] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,719] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,719] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,719] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,720] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,720] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=JcqNatGCTIqk2TVHn8pksg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, 
errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, 
errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=ctm0k7NMTIu_tGFXft5nrA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-05-01 08:51:40,720] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,720] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,720] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,720] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,720] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,720] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,720] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,721] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,721] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,721] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,721] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,727] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 17 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,727] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,727] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,728] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 18 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,728] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,728] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,728] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,728] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,728] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,728] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,728] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,728] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,728] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,728] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,729] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,729] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-05-01 08:51:40,729] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,729] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,729] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,729] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,729] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,729] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,729] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,729] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,729] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,730] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,731] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,731] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,731] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,731] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,731] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,731] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,731] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,731] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,731] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,731] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,731] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,731] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,731] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,732] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-05-01 08:51:40,733] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-05-01 08:51:40,821] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-19ea082b-9ba1-4d70-9565-10711f7484b9 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,821] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group e55cdecf-bd7f-4245-8ff0-8ac852d4496f in Empty state. 
Created a new member id consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3-b7d212f5-01d6-47d7-aed1-e1b63c70da80 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,838] INFO [GroupCoordinator 1]: Preparing to rebalance group e55cdecf-bd7f-4245-8ff0-8ac852d4496f in state PreparingRebalance with old generation 0 (__consumer_offsets-4) (reason: Adding new member consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3-b7d212f5-01d6-47d7-aed1-e1b63c70da80 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:40,838] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-19ea082b-9ba1-4d70-9565-10711f7484b9 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:41,510] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group be7c28cf-ee32-4168-825d-edc2db369b35 in Empty state. Created a new member id consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2-5a287fc4-d6ce-4d9e-9f70-f65617628610 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:41,513] INFO [GroupCoordinator 1]: Preparing to rebalance group be7c28cf-ee32-4168-825d-edc2db369b35 in state PreparingRebalance with old generation 0 (__consumer_offsets-5) (reason: Adding new member consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2-5a287fc4-d6ce-4d9e-9f70-f65617628610 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:43,849] INFO [GroupCoordinator 1]: Stabilized group e55cdecf-bd7f-4245-8ff0-8ac852d4496f generation 1 (__consumer_offsets-4) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:43,852] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:43,877] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-19ea082b-9ba1-4d70-9565-10711f7484b9 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:43,885] INFO [GroupCoordinator 1]: Assignment received from leader consumer-e55cdecf-bd7f-4245-8ff0-8ac852d4496f-3-b7d212f5-01d6-47d7-aed1-e1b63c70da80 for group e55cdecf-bd7f-4245-8ff0-8ac852d4496f for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:44,515] INFO [GroupCoordinator 1]: Stabilized group be7c28cf-ee32-4168-825d-edc2db369b35 generation 1 (__consumer_offsets-5) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-05-01 08:51:44,530] INFO [GroupCoordinator 1]: Assignment received from leader consumer-be7c28cf-ee32-4168-825d-edc2db369b35-2-5a287fc4-d6ce-4d9e-9f70-f65617628610 for group be7c28cf-ee32-4168-825d-edc2db369b35 for generation 1. 
The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping policy-api ...
Stopping grafana ...
Stopping kafka ...
Stopping mariadb ...
Stopping prometheus ...
Stopping zookeeper ...
Stopping simulator ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping simulator ... done
Stopping policy-pap ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping zookeeper ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing policy-api ...
Removing grafana ...
Removing policy-db-migrator ...
Removing kafka ...
Removing mariadb ...
Removing prometheus ...
Removing zookeeper ...
Removing simulator ...
Removing grafana ... done
Removing policy-apex-pdp ... done
Removing policy-api ... done
Removing kafka ... done
Removing policy-pap ... done
Removing policy-db-migrator ... done
Removing simulator ... done
Removing mariadb ... done
Removing prometheus ... done
Removing zookeeper ... done
Removing network compose_default
++ cd /w/workspace/policy-pap-master-project-csit-pap
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ [[ -n /tmp/tmp.LZkIHxV7A6 ]]
+ rsync -av /tmp/tmp.LZkIHxV7A6/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 919,666 bytes received 95 bytes 1,839,522.00 bytes/sec
total size is 919,125 speedup is 1.00
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+ exit 1
Build step 'Execute shell' marked build as failure
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2076 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
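Editor's note: the `load_set` trace above shows a common Jenkins-shell idiom — capture the one-letter shell options from `$-`, clear every long-form option listed in `SHELLOPTS`, then re-apply the saved set. A minimal sketch of that save/restore pattern follows; `save_set` is a hypothetical companion written for illustration, not part of the job's actual script.

```shell
# Sketch of the shell-option save/restore idiom seen in the trace above.
# save_set is hypothetical; the job's load_set works from a previously
# captured _setopts string such as "hxB".
save_set() {
  _setopts="$-"                        # current one-letter options, e.g. "hxB"
}

load_set() {
  # Turn off every long-form option currently listed in SHELLOPTS ...
  for i in $(echo "${SHELLOPTS:-}" | tr ':' ' '); do
    set +o "$i" 2>/dev/null || true
  done
  # ... then re-enable each saved one-letter option, one character at a time.
  for i in $(echo "$_setopts" | sed 's/./& /g'); do
    set "-$i" 2>/dev/null || true
  done
}
```

Wrapping cleanup code this way lets a script toggle `xtrace` or `noglob` locally and still hand back the caller's original option state.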
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3380155831757117821.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8242223429356219361.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9176079323535964874.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-uSWi from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-uSWi/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15670873260954534875.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config15042774162260263386tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
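The `package-listing.sh` trace above snapshots the installed-package list before and after the build and diffs the two files. A sketch of the same idea (the snapshot paths and the no-op build step in the middle are illustrative; the real script also branches on `facter osfamily`):

```shell
#!/bin/bash
# Snapshot the installed-package list twice and diff, as the job does
# with /tmp/packages_start.txt and /tmp/packages_end.txt.
start=$(mktemp)
end=$(mktemp)

list_packages() {
  # Keep only installed entries ("ii" status rows), like the job's
  # `dpkg -l | grep '^ii'`; degrade to an empty list on non-Debian hosts.
  dpkg -l 2>/dev/null | grep '^ii' || true
}

list_packages > "$start"
# ... the build would install/remove packages here ...
list_packages > "$end"

# diff exits non-zero when the lists differ; the job tolerates that
# and archives the (possibly empty) diff.
diff "$start" "$end" > /tmp/packages_diff.txt || true
wc -l < /tmp/packages_diff.txt
```

Archiving `packages_diff.txt` makes it easy to see, from the build artifacts alone, whether a CSIT run mutated the host's package set.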
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs

[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3428008477795633208.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6942840580408731328.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-uSWi from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-uSWi/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10288045589134484055.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10165376116644845651.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-uSWi from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-uSWi/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins7151507145817225300.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-uSWi from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-uSWi/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1672
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
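Each post-build step above reuses one shared virtualenv rather than creating a fresh one: a marker file (`/tmp/.os_lf_venv`) records the venv path, and later steps read it back. A hedged sketch of that marker-file pattern (paths and messages are illustrative; `lf-activate-venv()`'s real implementation differs):

```shell
#!/bin/bash
# Marker-file venv reuse: the pattern behind the repeated
# "Reuse venv:/tmp/venv-uSWi from file:/tmp/.os_lf_venv" messages.
marker="${VENV_MARKER:-/tmp/.os_lf_venv_demo}"

if [ -f "$marker" ] && [ -d "$(cat "$marker" 2>/dev/null)" ]; then
  # A previous step already created a venv; reuse it.
  venv="$(cat "$marker")"
  echo "Reuse venv:$venv from file:$marker"
else
  # First step on this node: create the venv and record its path.
  venv="$(mktemp -d /tmp/venv-XXXXXX)"
  mkdir -p "$venv/bin"   # stand-in for `python3 -m venv "$venv"`
  printf '%s\n' "$venv" > "$marker"
  echo "Creating python3 venv at $venv"
fi

# Later steps pick up tools (lftools, etc.) from the shared venv.
export PATH="$venv/bin:$PATH"
```

Reusing one venv is why only the first step in the build pays the cost of creating it; every subsequent `lf-activate-venv()` call just prepends the cached `bin` directory to `PATH`.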
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-36634 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities

---> nproc:
8

---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  142G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001

---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         835       25176           0        6155       30876
Swap:          1023           0        1023

---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:c8:98:95 brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.72/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85929sec preferred_lft 85929sec
    inet6 fe80::f816:3eff:fec8:9895/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:68:14:5a:e2 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever

---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-36634)  05/01/24  _x86_64_  (8 CPU)

08:47:03     LINUX RESTART  (8 CPU)

08:48:01          tps      rtps      wtps   bread/s   bwrtn/s
08:49:01       117.73     27.43     90.30   2079.52  19105.08
08:50:01       107.58      9.33     98.25   1638.53  21551.34
08:51:01       239.33      3.63    235.69    421.00 133304.85
08:52:01       290.27      9.53    280.74    405.67  18508.88
08:53:01        11.06      0.02     11.05      3.60   8974.92
08:54:01        51.82      0.02     51.81      1.07  10704.57
Average:       136.30      8.33    127.97    758.23  35358.27

08:48:01  kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
08:49:01   30018468  31745644   2920752      8.87     78764   1951068   1411116      4.15    830052   1780572    134364
08:50:01   27271068  31688272   5668152     17.21    128228   4474688   1419068      4.18    990188   4211512   2356176
08:51:01   25590940  31436832   7348280     22.31    141608   5826940   4382356     12.89   1268692   5524384       636
08:52:01   23622568  29613272   9316652     28.28    157244   5942176   8706996     25.62   3253452   5457408       344
08:53:01   23649804  29641264   9289416     28.20    157412   5942508   8723128     25.67   3225840   5456244       180
08:54:01   25168268  31177620   7770952     23.59    158336   5969560   2341860      6.89   1759448   5451884        80
Average:   25886853  30883817   7052367     21.41    136932   5017823   4497421     13.23   1887945   4647001    415297

08:48:01            IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
08:49:01             ens3     55.71     39.56    929.73      8.57      0.00      0.00      0.00      0.00
08:49:01               lo      1.27      1.27      0.15      0.15      0.00      0.00      0.00      0.00
08:49:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:50:01             ens3    759.21    403.12  17539.39     30.95      0.00      0.00      0.00      0.00
08:50:01               lo      9.07      9.07      0.89      0.89      0.00      0.00      0.00      0.00
08:50:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:50:01  br-d5366b076abb      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:51:01      veth0f8ece5      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:51:01             ens3    394.20    212.73  13597.44     15.28      0.00      0.00      0.00      0.00
08:51:01               lo      4.00      4.00      0.38      0.38      0.00      0.00      0.00      0.00
08:51:01      vethf3d3ff0      0.00      0.12      0.00      0.01      0.00      0.00      0.00      0.00
08:52:01      veth0f8ece5      0.47      0.78      0.05      1.10      0.00      0.00      0.00      0.00
08:52:01             ens3      4.33      3.78      1.21      1.15      0.00      0.00      0.00      0.00
08:52:01               lo      2.68      2.68      2.43      2.43      0.00      0.00      0.00      0.00
08:52:01      vethf3d3ff0      5.10      6.42      0.82      0.92      0.00      0.00      0.00      0.00
08:53:01      veth0f8ece5      0.57      0.58      0.05      1.52      0.00      0.00      0.00      0.00
08:53:01             ens3      2.87      2.32      0.60      0.52      0.00      0.00      0.00      0.00
08:53:01               lo      6.07      6.07      1.37      1.37      0.00      0.00      0.00      0.00
08:53:01      vethf3d3ff0      0.17      0.35      0.01      0.02      0.00      0.00      0.00      0.00
08:54:01             ens3     15.16     13.96     11.16     16.82      0.00      0.00      0.00      0.00
08:54:01               lo      6.58      6.58      0.54      0.54      0.00      0.00      0.00      0.00
08:54:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:54:01  br-d5366b076abb      4.18      4.65      1.99      2.18      0.00      0.00      0.00      0.00
Average:             ens3    205.25    112.58   5346.59     12.21      0.00      0.00      0.00      0.00
Average:               lo      4.94      4.94      0.96      0.96      0.00      0.00      0.00      0.00
Average:          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:  br-d5366b076abb      0.70      0.77      0.33      0.36      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-36634)  05/01/24  _x86_64_  (8 CPU)

08:47:03     LINUX RESTART  (8 CPU)

08:48:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
08:49:01        all     10.30      0.00      0.81      2.09      0.03     86.77
08:49:01          0     11.39      0.00      0.87      0.84      0.05     86.86
08:49:01          1      3.76      0.00      0.65      3.94      0.05     91.60
08:49:01          2      7.97      0.00      0.87      0.15      0.02     91.00
08:49:01          3      0.97      0.00      0.33      0.10      0.02     98.58
08:49:01          4     20.01      0.00      1.22      1.13      0.03     77.61
08:49:01          5      2.87      0.00      0.50      8.91      0.02     87.70
08:49:01          6     24.77      0.00      1.25      1.29      0.03     72.65
08:49:01          7     10.64      0.00      0.70      0.42      0.03     88.21
08:50:01        all     12.70      0.00      3.78      1.95      0.06     81.51
08:50:01          0     21.92      0.00      4.03      1.19      0.07     72.79
08:50:01          1      7.59      0.00      3.88      4.28      0.08     84.17
08:50:01          2     14.15      0.00      3.44      0.40      0.05     81.95
08:50:01          3     23.32      0.00      4.69      1.32      0.07     70.60
08:50:01          4     11.85      0.00      4.47      0.29      0.03     83.36
08:50:01          5      5.18      0.00      3.76      7.76      0.05     83.25
08:50:01          6      8.59      0.00      3.09      0.32      0.05     87.95
08:50:01          7      8.98      0.00      2.91      0.02      0.03     88.06
08:51:01        all      6.06      0.00      2.75     10.09      0.04     81.06
08:51:01          0      5.01      0.00      3.31     49.40      0.05     42.23
08:51:01          1      5.84      0.00      2.69      4.95      0.03     86.49
08:51:01          2      7.53      0.00      2.57      2.92      0.03     86.94
08:51:01          3      7.86      0.00      2.51     14.13      0.05     75.44
08:51:01          4      6.19      0.00      3.13      0.76      0.07     89.86
08:51:01          5      5.25      0.00      3.45      1.38      0.03     89.88
08:51:01          6      6.21      0.00      2.09      5.39      0.03     86.28
08:51:01          7      4.59      0.00      2.25      1.98      0.03     91.15
08:52:01        all     28.03      0.00      3.42      2.04      0.08     66.43
08:52:01          0     31.87      0.00      3.75      0.90      0.08     63.39
08:52:01          1     26.65      0.00      3.68      2.89      0.10     66.69
08:52:01          2     23.48      0.00      3.27      1.51      0.10     71.63
08:52:01          3     26.92      0.00      3.62      6.12      0.08     63.25
08:52:01          4     29.79      0.00      3.34      0.29      0.08     66.50
08:52:01          5     27.42      0.00      3.21      1.41      0.08     67.88
08:52:01          6     27.91      0.00      2.83      2.06      0.07     67.13
08:52:01          7     30.15      0.00      3.70      1.12      0.08     64.94
08:53:01        all      3.39      0.00      0.34      0.72      0.05     95.49
08:53:01          0      2.72      0.00      0.27      0.03      0.07     96.92
08:53:01          1      3.15      0.00      0.43      0.00      0.05     96.36
08:53:01          2      3.39      0.00      0.47      0.00      0.05     96.09
08:53:01          3      3.84      0.00      0.27      5.52      0.07     90.30
08:53:01          4      2.95      0.00      0.22      0.23      0.02     96.58
08:53:01          5      5.01      0.00      0.50      0.00      0.05     94.44
08:53:01          6      3.52      0.00      0.38      0.00      0.05     96.04
08:53:01          7      2.54      0.00      0.20      0.03      0.03     97.20
08:54:01        all      1.35      0.00      0.47      0.77      0.04     97.37
08:54:01          0      1.30      0.00      0.60      0.64      0.03     97.43
08:54:01          1      1.15      0.00      0.55      0.08      0.03     98.18
08:54:01          2      0.93      0.00      0.50      0.03      0.05     98.48
08:54:01          3      0.99      0.00      0.29      4.82      0.05     93.86
08:54:01          4      1.54      0.00      0.45      0.08      0.03     97.89
08:54:01          5      1.15      0.00      0.45      0.28      0.03     98.08
08:54:01          6      1.22      0.00      0.47      0.17      0.03     98.11
08:54:01          7      2.47      0.00      0.43      0.10      0.07     96.93
Average:        all     10.29      0.00      1.92      2.94      0.05     84.79
Average:          0     12.36      0.00      2.13      8.78      0.06     76.67
Average:          1      8.01      0.00      1.97      2.68      0.06     87.28
Average:          2      9.56      0.00      1.85      0.84      0.05     87.71
Average:          3     10.62      0.00      1.94      5.33      0.06     82.05
Average:          4     12.05      0.00      2.13      0.46      0.04     85.31
Average:          5      7.82      0.00      1.97      3.28      0.04     86.88
Average:          6     12.04      0.00      1.68      1.53      0.04     84.70
Average:          7      9.89      0.00      1.70      0.61      0.05     87.75
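The `sar -P ALL` table above ends with an `Average:` block; the `all` row summarizes the whole run. A small awk sketch that extracts the average %idle from output shaped like this table (the sample lines below mimic the log's 8-column layout; this is a post-hoc analysis helper, not part of the job):

```shell
#!/bin/sh
# Pull the "Average: all" row out of sar -P ALL style output and print
# its %idle column (the last field).
sample='08:54:01 all 1.35 0.00 0.47 0.77 0.04 97.37
Average: all 10.29 0.00 1.92 2.94 0.05 84.79
Average: 0 12.36 0.00 2.13 8.78 0.06 76.67'

idle=$(printf '%s\n' "$sample" | awk '$1 == "Average:" && $2 == "all" { print $NF }')
echo "average idle: ${idle}%"
```

Running this over the table above would report 84.79% average idle, i.e. the build spent most of its wall-clock time waiting rather than CPU-bound, with the busiest interval (08:52:01, ~28% user) during the test run itself.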