11:06:38 Started by upstream project "policy-docker-master-merge-java" build number 354
11:06:38 originally caused by:
11:06:38  Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137748
11:06:38 Running as SYSTEM
11:06:38 [EnvInject] - Loading node environment variables.
11:06:38 Building remotely on prd-ubuntu1804-docker-8c-8g-26003 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
11:06:38 [ssh-agent] Looking for ssh-agent implementation...
11:06:38 [ssh-agent]   Exec ssh-agent (binary ssh-agent on a remote machine)
11:06:38 $ ssh-agent
11:06:38 SSH_AUTH_SOCK=/tmp/ssh-zkjMvRKd0n0L/agent.2094
11:06:38 SSH_AGENT_PID=2096
11:06:38 [ssh-agent] Started.
11:06:38 Running ssh-add (command line suppressed)
11:06:38 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9276211802764999686.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9276211802764999686.key)
11:06:38 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
11:06:38 The recommended git tool is: NONE
11:06:40 using credential onap-jenkins-ssh
11:06:40 Wiping out workspace first.
11:06:40 Cloning the remote Git repository
11:06:40 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
11:06:40  > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
11:06:40 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
11:06:40  > git --version # timeout=10
11:06:40  > git --version # 'git version 2.17.1'
11:06:40 using GIT_SSH to set credentials Gerrit user
11:06:40 Verifying host key using manually-configured host key entries
11:06:40  > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
11:06:40  > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
11:06:40  > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
11:06:41 Avoid second fetch
11:06:41  > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
11:06:41 Checking out Revision 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb (refs/remotes/origin/master)
11:06:41  > git config core.sparsecheckout # timeout=10
11:06:41  > git checkout -f 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=30
11:06:41 Commit message: "Update snapshot and/or references of policy/docker to latest snapshots"
11:06:41  > git rev-list --no-walk b5981c8a48d21908d0ead6dc8d35b982c1917eb7 # timeout=10
11:06:41 provisioning config files...
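The SCM trace above pins the build to one exact revision rather than a branch tip. To replay the same checkout outside Jenkins, the traced git plugin commands reduce to a short sequence; the mirror URL and SHA are taken verbatim from the trace, the local directory name is arbitrary:

    # Minimal local replay of the checkout traced above (sketch).
    git init policy-docker && cd policy-docker
    git fetch --tags git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
    git checkout -f 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb   # detached HEAD, same commit as the job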
11:06:41 copy managed file [npmrc] to file:/home/jenkins/.npmrc
11:06:41 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
11:06:41 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12674761173410683312.sh
11:06:41 ---> python-tools-install.sh
11:06:41 Setup pyenv:
11:06:41 * system (set by /opt/pyenv/version)
11:06:41 * 3.8.13 (set by /opt/pyenv/version)
11:06:41 * 3.9.13 (set by /opt/pyenv/version)
11:06:41 * 3.10.6 (set by /opt/pyenv/version)
11:06:46 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-o6aK
11:06:46 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
11:06:49 lf-activate-venv(): INFO: Installing: lftools
11:07:25 lf-activate-venv(): INFO: Adding /tmp/venv-o6aK/bin to PATH
11:07:25 Generating Requirements File
11:07:53 Python 3.10.6
11:07:53 pip 24.0 from /tmp/venv-o6aK/lib/python3.10/site-packages/pip (python 3.10)
11:07:53 appdirs==1.4.4
11:07:53 argcomplete==3.3.0
11:07:53 aspy.yaml==1.3.0
11:07:53 attrs==23.2.0
11:07:53 autopage==0.5.2
11:07:53 beautifulsoup4==4.12.3
11:07:53 boto3==1.34.91
11:07:53 botocore==1.34.91
11:07:53 bs4==0.0.2
11:07:53 cachetools==5.3.3
11:07:53 certifi==2024.2.2
11:07:53 cffi==1.16.0
11:07:53 cfgv==3.4.0
11:07:53 chardet==5.2.0
11:07:53 charset-normalizer==3.3.2
11:07:53 click==8.1.7
11:07:53 cliff==4.6.0
11:07:53 cmd2==2.4.3
11:07:53 cryptography==3.3.2
11:07:53 debtcollector==3.0.0
11:07:53 decorator==5.1.1
11:07:53 defusedxml==0.7.1
11:07:53 Deprecated==1.2.14
11:07:53 distlib==0.3.8
11:07:53 dnspython==2.6.1
11:07:53 docker==4.2.2
11:07:53 dogpile.cache==1.3.2
11:07:53 email_validator==2.1.1
11:07:53 filelock==3.13.4
11:07:53 future==1.0.0
11:07:53 gitdb==4.0.11
11:07:53 GitPython==3.1.43
11:07:53 google-auth==2.29.0
11:07:53 httplib2==0.22.0
11:07:53 identify==2.5.36
11:07:53 idna==3.7
11:07:53 importlib-resources==1.5.0
11:07:53 iso8601==2.1.0
11:07:53 Jinja2==3.1.3
11:07:53 jmespath==1.0.1
11:07:53 jsonpatch==1.33
11:07:53 jsonpointer==2.4
11:07:53 jsonschema==4.21.1
11:07:53 jsonschema-specifications==2023.12.1
11:07:53 keystoneauth1==5.6.0
11:07:53 kubernetes==29.0.0
11:07:53 lftools==0.37.10
11:07:53 lxml==5.2.1
11:07:53 MarkupSafe==2.1.5
11:07:53 msgpack==1.0.8
11:07:53 multi_key_dict==2.0.3
11:07:53 munch==4.0.0
11:07:53 netaddr==1.2.1
11:07:53 netifaces==0.11.0
11:07:53 niet==1.4.2
11:07:53 nodeenv==1.8.0
11:07:53 oauth2client==4.1.3
11:07:53 oauthlib==3.2.2
11:07:53 openstacksdk==3.1.0
11:07:53 os-client-config==2.1.0
11:07:53 os-service-types==1.7.0
11:07:53 osc-lib==3.0.1
11:07:53 oslo.config==9.4.0
11:07:53 oslo.context==5.5.0
11:07:53 oslo.i18n==6.3.0
11:07:53 oslo.log==5.5.1
11:07:53 oslo.serialization==5.4.0
11:07:53 oslo.utils==7.1.0
11:07:53 packaging==24.0
11:07:53 pbr==6.0.0
11:07:53 platformdirs==4.2.1
11:07:53 prettytable==3.10.0
11:07:53 pyasn1==0.6.0
11:07:53 pyasn1_modules==0.4.0
11:07:53 pycparser==2.22
11:07:53 pygerrit2==2.0.15
11:07:53 PyGithub==2.3.0
11:07:53 pyinotify==0.9.6
11:07:53 PyJWT==2.8.0
11:07:53 PyNaCl==1.5.0
11:07:53 pyparsing==2.4.7
11:07:53 pyperclip==1.8.2
11:07:53 pyrsistent==0.20.0
11:07:53 python-cinderclient==9.5.0
11:07:53 python-dateutil==2.9.0.post0
11:07:53 python-heatclient==3.5.0
11:07:53 python-jenkins==1.8.2
11:07:53 python-keystoneclient==5.4.0
11:07:53 python-magnumclient==4.4.0
11:07:53 python-novaclient==18.6.0
11:07:53 python-openstackclient==6.6.0
11:07:53 python-swiftclient==4.5.0
11:07:53 PyYAML==6.0.1
11:07:53 referencing==0.35.0
11:07:53 requests==2.31.0
11:07:53 requests-oauthlib==2.0.0
11:07:53 requestsexceptions==1.4.0
11:07:53 rfc3986==2.0.0
11:07:53 rpds-py==0.18.0
11:07:53 rsa==4.9
11:07:53 ruamel.yaml==0.18.6
11:07:53 ruamel.yaml.clib==0.2.8
11:07:53 s3transfer==0.10.1
11:07:53 simplejson==3.19.2
11:07:53 six==1.16.0
11:07:53 smmap==5.0.1
11:07:53 soupsieve==2.5
11:07:53 stevedore==5.2.0
11:07:53 tabulate==0.9.0
11:07:53 toml==0.10.2
11:07:53 tomlkit==0.12.4
11:07:53 tqdm==4.66.2
11:07:53 typing_extensions==4.11.0
11:07:53 tzdata==2024.1
11:07:53 urllib3==1.26.18
11:07:53 virtualenv==20.26.0
11:07:53 wcwidth==0.2.13
11:07:53 websocket-client==1.8.0
11:07:53 wrapt==1.16.0
11:07:53 xdg==6.0.0
11:07:53 xmltodict==0.13.0
11:07:53 yq==3.4.1
11:07:53 [EnvInject] - Injecting environment variables from a build step.
11:07:53 [EnvInject] - Injecting as environment variables the properties content
11:07:53 SET_JDK_VERSION=openjdk17
11:07:53 GIT_URL="git://cloud.onap.org/mirror"
11:07:53
11:07:53 [EnvInject] - Variables injected successfully.
11:07:53 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins12116953672105964344.sh
11:07:53 ---> update-java-alternatives.sh
11:07:54 ---> Updating Java version
11:07:54 ---> Ubuntu/Debian system detected
11:07:54 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
11:07:54 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
11:07:54 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
11:07:54 openjdk version "17.0.4" 2022-07-19
11:07:54 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
11:07:54 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
11:07:54 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
11:07:54 [EnvInject] - Injecting environment variables from a build step.
11:07:54 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
11:07:54 [EnvInject] - Variables injected successfully.
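The step above switches the node to JDK 17 via update-alternatives and hands JAVA_HOME back to Jenkins through the '/tmp/java.env' properties file that EnvInject reads. The script body itself is not part of this log, so the following is only a sketch of that pattern, assuming the same paths the log prints:

    #!/bin/sh
    # Hypothetical reconstruction of update-java-alternatives.sh for Ubuntu/Debian.
    JDK_DIR=/usr/lib/jvm/java-17-openjdk-amd64
    sudo update-alternatives --set java  "${JDK_DIR}/bin/java"    # produces 'manual mode', as in the log
    sudo update-alternatives --set javac "${JDK_DIR}/bin/javac"
    java -version
    echo "JAVA_HOME=${JDK_DIR}" > /tmp/java.env                   # consumed by the EnvInject build step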
11:07:54 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins872052589966030122.sh
11:07:54 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
11:07:54 + set +u
11:07:54 + save_set
11:07:54 + RUN_CSIT_SAVE_SET=ehxB
11:07:54 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
11:07:54 + '[' 1 -eq 0 ']'
11:07:54 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
11:07:54 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
11:07:54 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
11:07:54 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
11:07:54 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
11:07:54 + export ROBOT_VARIABLES=
11:07:54 + ROBOT_VARIABLES=
11:07:54 + export PROJECT=pap
11:07:54 + PROJECT=pap
11:07:54 + cd /w/workspace/policy-pap-master-project-csit-pap
11:07:54 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
11:07:54 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
11:07:54 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
11:07:54 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
11:07:54 + relax_set
11:07:54 + set +e
11:07:54 + set +o pipefail
11:07:54 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
11:07:54 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
11:07:54 +++ mktemp -d
11:07:54 ++ ROBOT_VENV=/tmp/tmp.QP1iXFWHVN
11:07:54 ++ echo ROBOT_VENV=/tmp/tmp.QP1iXFWHVN
11:07:54 +++ python3 --version
11:07:54 ++ echo 'Python version is: Python 3.6.9'
11:07:54 Python version is: Python 3.6.9
11:07:54 ++ python3 -m venv --clear /tmp/tmp.QP1iXFWHVN
11:07:56 ++ source /tmp/tmp.QP1iXFWHVN/bin/activate
11:07:56 +++ deactivate nondestructive
11:07:56 +++ '[' -n '' ']'
11:07:56 +++ '[' -n '' ']'
11:07:56 +++ '[' -n /bin/bash -o -n '' ']'
11:07:56 +++ hash -r
11:07:56 +++ '[' -n '' ']'
11:07:56 +++ unset VIRTUAL_ENV
11:07:56 +++ '[' '!' nondestructive = nondestructive ']'
11:07:56 +++ VIRTUAL_ENV=/tmp/tmp.QP1iXFWHVN
11:07:56 +++ export VIRTUAL_ENV
11:07:56 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
11:07:56 +++ PATH=/tmp/tmp.QP1iXFWHVN/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
11:07:56 +++ export PATH
11:07:56 +++ '[' -n '' ']'
11:07:56 +++ '[' -z '' ']'
11:07:56 +++ _OLD_VIRTUAL_PS1=
11:07:56 +++ '[' 'x(tmp.QP1iXFWHVN) ' '!=' x ']'
11:07:56 +++ PS1='(tmp.QP1iXFWHVN) '
11:07:56 +++ export PS1
11:07:56 +++ '[' -n /bin/bash -o -n '' ']'
11:07:56 +++ hash -r
11:07:56 ++ set -exu
11:07:56 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
11:08:00 ++ echo 'Installing Python Requirements'
11:08:00 Installing Python Requirements
11:08:00 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
11:08:19 ++ python3 -m pip -qq freeze
11:08:19 bcrypt==4.0.1
11:08:19 beautifulsoup4==4.12.3
11:08:19 bitarray==2.9.2
11:08:19 certifi==2024.2.2
11:08:19 cffi==1.15.1
11:08:19 charset-normalizer==2.0.12
11:08:19 cryptography==40.0.2
11:08:19 decorator==5.1.1
11:08:19 elasticsearch==7.17.9
11:08:19 elasticsearch-dsl==7.4.1
11:08:19 enum34==1.1.10
11:08:19 idna==3.7
11:08:19 importlib-resources==5.4.0
11:08:19 ipaddr==2.2.0
11:08:19 isodate==0.6.1
11:08:19 jmespath==0.10.0
11:08:19 jsonpatch==1.32
11:08:19 jsonpath-rw==1.4.0
11:08:19 jsonpointer==2.3
11:08:19 lxml==5.2.1
11:08:19 netaddr==0.8.0
11:08:19 netifaces==0.11.0
11:08:19 odltools==0.1.28
11:08:19 paramiko==3.4.0
11:08:19 pkg_resources==0.0.0
11:08:19 ply==3.11
11:08:19 pyang==2.6.0
11:08:19 pyangbind==0.8.1
11:08:19 pycparser==2.21
11:08:19 pyhocon==0.3.60
11:08:19 PyNaCl==1.5.0
11:08:19 pyparsing==3.1.2
11:08:19 python-dateutil==2.9.0.post0
11:08:19 regex==2023.8.8
11:08:19 requests==2.27.1
11:08:19 robotframework==6.1.1
11:08:19 robotframework-httplibrary==0.4.2
11:08:19 robotframework-pythonlibcore==3.0.0
11:08:19 robotframework-requests==0.9.4
11:08:19 robotframework-selenium2library==3.0.0
11:08:19 robotframework-seleniumlibrary==5.1.3
11:08:19 robotframework-sshlibrary==3.8.0
11:08:19 scapy==2.5.0
11:08:19 scp==0.14.5
11:08:19 selenium==3.141.0
11:08:19 six==1.16.0
11:08:19 soupsieve==2.3.2.post1
11:08:19 urllib3==1.26.18
11:08:19 waitress==2.0.0
11:08:19 WebOb==1.8.7
11:08:19 WebTest==3.0.0
11:08:19 zipp==3.6.0
11:08:19 ++ mkdir -p /tmp/tmp.QP1iXFWHVN/src/onap
11:08:19 ++ rm -rf /tmp/tmp.QP1iXFWHVN/src/onap/testsuite
11:08:19 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
11:08:25 ++ echo 'Installing python confluent-kafka library'
11:08:25 Installing python confluent-kafka library
11:08:25 ++ python3 -m pip install -qq confluent-kafka
11:08:26 ++ echo 'Uninstall docker-py and reinstall docker.'
11:08:26 Uninstall docker-py and reinstall docker.
11:08:26 ++ python3 -m pip uninstall -y -qq docker
11:08:27 ++ python3 -m pip install -U -qq docker
11:08:28 ++ python3 -m pip -qq freeze
11:08:28 bcrypt==4.0.1
11:08:28 beautifulsoup4==4.12.3
11:08:28 bitarray==2.9.2
11:08:28 certifi==2024.2.2
11:08:28 cffi==1.15.1
11:08:28 charset-normalizer==2.0.12
11:08:28 confluent-kafka==2.3.0
11:08:28 cryptography==40.0.2
11:08:28 decorator==5.1.1
11:08:28 deepdiff==5.7.0
11:08:28 dnspython==2.2.1
11:08:28 docker==5.0.3
11:08:28 elasticsearch==7.17.9
11:08:28 elasticsearch-dsl==7.4.1
11:08:28 enum34==1.1.10
11:08:28 future==1.0.0
11:08:28 idna==3.7
11:08:28 importlib-resources==5.4.0
11:08:28 ipaddr==2.2.0
11:08:28 isodate==0.6.1
11:08:28 Jinja2==3.0.3
11:08:28 jmespath==0.10.0
11:08:28 jsonpatch==1.32
11:08:28 jsonpath-rw==1.4.0
11:08:28 jsonpointer==2.3
11:08:28 kafka-python==2.0.2
11:08:28 lxml==5.2.1
11:08:28 MarkupSafe==2.0.1
11:08:28 more-itertools==5.0.0
11:08:28 netaddr==0.8.0
11:08:28 netifaces==0.11.0
11:08:28 odltools==0.1.28
11:08:28 ordered-set==4.0.2
11:08:28 paramiko==3.4.0
11:08:28 pbr==6.0.0
11:08:28 pkg_resources==0.0.0
11:08:28 ply==3.11
11:08:28 protobuf==3.19.6
11:08:28 pyang==2.6.0
11:08:28 pyangbind==0.8.1
11:08:28 pycparser==2.21
11:08:28 pyhocon==0.3.60
11:08:28 PyNaCl==1.5.0
11:08:28 pyparsing==3.1.2
11:08:28 python-dateutil==2.9.0.post0
11:08:28 PyYAML==6.0.1
11:08:28 regex==2023.8.8
11:08:28 requests==2.27.1
11:08:28 robotframework==6.1.1
11:08:28 robotframework-httplibrary==0.4.2
11:08:28 robotframework-onap==0.6.0.dev105
11:08:28 robotframework-pythonlibcore==3.0.0
11:08:28 robotframework-requests==0.9.4
11:08:28 robotframework-selenium2library==3.0.0
11:08:28 robotframework-seleniumlibrary==5.1.3
11:08:28 robotframework-sshlibrary==3.8.0
11:08:28 robotlibcore-temp==1.0.2
11:08:28 scapy==2.5.0
11:08:28 scp==0.14.5
11:08:28 selenium==3.141.0
11:08:28 six==1.16.0
11:08:28 soupsieve==2.3.2.post1
11:08:28 urllib3==1.26.18
11:08:28 waitress==2.0.0
11:08:28 WebOb==1.8.7
11:08:28 websocket-client==1.3.1
11:08:28 WebTest==3.0.0
11:08:28 zipp==3.6.0
11:08:28 ++ uname
11:08:28 ++ grep -q Linux
11:08:28 ++ sudo apt-get -y -qq install libxml2-utils
11:08:28 + load_set
11:08:28 + _setopts=ehuxB
11:08:28 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
11:08:28 ++ tr : ' '
11:08:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:08:28 + set +o braceexpand
11:08:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:08:28 + set +o hashall
11:08:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:08:28 + set +o interactive-comments
11:08:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:08:28 + set +o nounset
11:08:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:08:28 + set +o xtrace
11:08:28 ++ echo ehuxB
11:08:28 ++ sed 's/./& /g'
11:08:28 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:08:28 + set +e
11:08:28 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:08:28 + set +h
11:08:28 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:08:28 + set +u
11:08:28 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:08:28 + set +x
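run-project-csit.sh brackets every sourced script with the helpers traced above: save_set records the caller's shell options, relax_set loosens errexit/pipefail so a sourced script cannot abort the job, and load_set clears the option state again afterwards. Reconstructed from the xtrace output alone (the real helper bodies live in the CSIT scripts, so treat this as a sketch):

    # Option-management helpers, inferred from the trace above.
    save_set() {
        RUN_CSIT_SAVE_SET="$-"              # single-letter flags, e.g. "ehxB"
        RUN_CSIT_SHELLOPTS="${SHELLOPTS}"   # colon-separated long options
    }
    relax_set() {
        set +e
        set +o pipefail
    }
    load_set() {
        _setopts="$-"
        for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
            set +o "$i"                     # drop each active long option in turn
        done
        for i in $(echo "$_setopts" | sed 's/./& /g'); do
            set "+$i"                       # drop each single-letter flag as well
        done
    }
    source_safely() {
        [ -z "$1" ] && return 1
        relax_set
        . "$1"
        load_set
    }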
11:08:28 + source_safely /tmp/tmp.QP1iXFWHVN/bin/activate
11:08:28 + '[' -z /tmp/tmp.QP1iXFWHVN/bin/activate ']'
11:08:28 + relax_set
11:08:28 + set +e
11:08:28 + set +o pipefail
11:08:28 + . /tmp/tmp.QP1iXFWHVN/bin/activate
11:08:28 ++ deactivate nondestructive
11:08:28 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
11:08:28 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
11:08:28 ++ export PATH
11:08:28 ++ unset _OLD_VIRTUAL_PATH
11:08:28 ++ '[' -n '' ']'
11:08:28 ++ '[' -n /bin/bash -o -n '' ']'
11:08:28 ++ hash -r
11:08:28 ++ '[' -n '' ']'
11:08:28 ++ unset VIRTUAL_ENV
11:08:28 ++ '[' '!' nondestructive = nondestructive ']'
11:08:28 ++ VIRTUAL_ENV=/tmp/tmp.QP1iXFWHVN
11:08:28 ++ export VIRTUAL_ENV
11:08:28 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
11:08:28 ++ PATH=/tmp/tmp.QP1iXFWHVN/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
11:08:28 ++ export PATH
11:08:28 ++ '[' -n '' ']'
11:08:28 ++ '[' -z '' ']'
11:08:28 ++ _OLD_VIRTUAL_PS1='(tmp.QP1iXFWHVN) '
11:08:28 ++ '[' 'x(tmp.QP1iXFWHVN) ' '!=' x ']'
11:08:28 ++ PS1='(tmp.QP1iXFWHVN) (tmp.QP1iXFWHVN) '
11:08:28 ++ export PS1
11:08:28 ++ '[' -n /bin/bash -o -n '' ']'
11:08:28 ++ hash -r
11:08:28 + load_set
11:08:28 + _setopts=hxB
11:08:28 ++ echo braceexpand:hashall:interactive-comments:xtrace
11:08:28 ++ tr : ' '
11:08:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:08:28 + set +o braceexpand
11:08:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:08:28 + set +o hashall
11:08:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:08:28 + set +o interactive-comments
11:08:28 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:08:28 + set +o xtrace
11:08:28 ++ echo hxB
11:08:28 ++ sed 's/./& /g'
11:08:28 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:08:28 + set +h
11:08:28 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:08:28 + set +x
11:08:28 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
11:08:28 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
11:08:28 + export TEST_OPTIONS=
11:08:28 + TEST_OPTIONS=
11:08:28 ++ mktemp -d
11:08:28 + WORKDIR=/tmp/tmp.t5Zgostjf0
11:08:28 + cd /tmp/tmp.t5Zgostjf0
11:08:28 + docker login -u docker -p docker nexus3.onap.org:10001
11:08:29 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
11:08:29 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
11:08:29 Configure a credential helper to remove this warning. See
11:08:29 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
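The login warning above is avoidable: docker login accepts the password on stdin, which keeps it out of argv and shell history. An equivalent of the traced command (same registry, same throwaway CI credentials):

    echo docker | docker login -u docker --password-stdin nexus3.onap.org:10001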
11:08:29
11:08:29 Login Succeeded
11:08:29 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
11:08:29 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
11:08:29 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
11:08:29 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
11:08:29 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
11:08:29 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
11:08:29 + relax_set
11:08:29 + set +e
11:08:29 + set +o pipefail
11:08:29 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
11:08:29 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
11:08:29 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
11:08:29 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
11:08:29 +++ GERRIT_BRANCH=master
11:08:29 +++ echo GERRIT_BRANCH=master
11:08:29 GERRIT_BRANCH=master
11:08:29 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
11:08:29 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
11:08:29 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
11:08:29 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
11:08:30 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
11:08:30 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
11:08:30 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
11:08:30 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
11:08:30 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
11:08:30 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
11:08:30 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
11:08:30 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
11:08:30 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
11:08:30 +++ grafana=false
11:08:30 +++ gui=false
11:08:30 +++ [[ 2 -gt 0 ]]
11:08:30 +++ key=apex-pdp
11:08:30 +++ case $key in
11:08:30 +++ echo apex-pdp
11:08:30 apex-pdp
11:08:30 +++ component=apex-pdp
11:08:30 +++ shift
11:08:30 +++ [[ 1 -gt 0 ]]
11:08:30 +++ key=--grafana
11:08:30 +++ case $key in
11:08:30 +++ grafana=true
11:08:30 +++ shift
11:08:30 +++ [[ 0 -gt 0 ]]
11:08:30 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
11:08:30 +++ echo 'Configuring docker compose...'
11:08:30 Configuring docker compose...
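start-compose.sh is invoked as 'start-compose.sh apex-pdp --grafana'; the xtrace above shows a shift-based loop that peels off one component name plus feature flags. Condensed into a sketch, inferred from the trace (the actual script may handle more cases):

    # Inferred argument loop: first non-flag arg is the component, flags toggle extras.
    grafana=false
    gui=false
    while [[ $# -gt 0 ]]; do
        key="$1"
        case $key in
            --grafana) grafana=true ;;      # matches 'grafana=true' in the trace
            --gui)     gui=true ;;
            *)         component="$key" ;;  # e.g. apex-pdp
        esac
        shift
    done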
11:08:30 +++ source export-ports.sh
11:08:30 +++ source get-versions.sh
11:08:32 +++ '[' -z pap ']'
11:08:32 +++ '[' -n apex-pdp ']'
11:08:32 +++ '[' apex-pdp == logs ']'
11:08:32 +++ '[' true = true ']'
11:08:32 +++ echo 'Starting apex-pdp application with Grafana'
11:08:32 Starting apex-pdp application with Grafana
11:08:32 +++ docker-compose up -d apex-pdp grafana
11:08:33 Creating network "compose_default" with the default driver
11:08:33 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
11:08:34 latest: Pulling from prom/prometheus
11:08:38 Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1
11:08:38 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
11:08:38 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
11:08:38 latest: Pulling from grafana/grafana
11:08:43 Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e
11:08:43 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
11:08:43 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
11:08:44 10.10.2: Pulling from mariadb
11:08:49 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
11:08:49 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
11:08:49 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)...
11:08:49 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator
11:08:53 Digest: sha256:e36a65d94835ba788264a5efe3d68880b46c12adcf1404808524de7b4d7c0e41
11:08:53 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT
11:08:53 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
11:08:54 latest: Pulling from confluentinc/cp-zookeeper
11:09:08 Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214
11:09:08 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
11:09:08 Pulling kafka (confluentinc/cp-kafka:latest)...
11:09:12 latest: Pulling from confluentinc/cp-kafka
11:09:18 Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa
11:09:18 Status: Downloaded newer image for confluentinc/cp-kafka:latest
11:09:18 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
11:09:18 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
11:09:27 Digest: sha256:6c43c624b12507ad4db9e9629273366fa843a4406dbb129d263c111145911791
11:09:27 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
11:09:27 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)...
11:09:27 3.1.2-SNAPSHOT: Pulling from onap/policy-api
11:09:35 Digest: sha256:0e8cbccfee833c5b2be68d71dd51902b884e77df24bbbac2751693f58bdc20ce
11:09:36 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT
11:09:36 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)...
11:09:38 3.1.2-SNAPSHOT: Pulling from onap/policy-pap
11:09:40 Digest: sha256:4424490684da433df5069c1f1dbbafe83fffd4c8b6a174807fb10d6443ecef06
11:09:40 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT
11:09:40 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)...
11:09:40 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp
11:09:47 Digest: sha256:75a74a87b7345e553563fbe2ececcd2285ed9500fd91489d9968ae81123b9982
11:09:47 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT
11:09:47 Creating simulator ...
11:09:48 Creating zookeeper ...
11:09:48 Creating prometheus ...
11:09:48 Creating mariadb ...
11:09:57 Creating zookeeper ... done
11:09:57 Creating kafka ...
11:09:58 Creating kafka ... done
11:09:59 Creating prometheus ... done
11:09:59 Creating grafana ...
11:10:00 Creating grafana ... done
11:10:01 Creating simulator ... done
11:10:02 Creating mariadb ... done
11:10:02 Creating policy-db-migrator ...
11:10:03 Creating policy-db-migrator ... done
11:10:03 Creating policy-api ...
11:10:04 Creating policy-api ... done
11:10:04 Creating policy-pap ...
11:10:05 Creating policy-pap ... done
11:10:05 Creating policy-apex-pdp ...
11:10:07 Creating policy-apex-pdp ... done
11:10:07 +++ echo 'Prometheus server: http://localhost:30259'
11:10:07 Prometheus server: http://localhost:30259
11:10:07 +++ echo 'Grafana server: http://localhost:30269'
11:10:07 Grafana server: http://localhost:30269
11:10:07 +++ cd /w/workspace/policy-pap-master-project-csit-pap
11:10:07 ++ sleep 10
11:10:17 ++ unset http_proxy https_proxy
11:10:17 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
11:10:17 Waiting for REST to come up on localhost port 30003...
11:10:17 NAMES             STATUS
11:10:17 policy-apex-pdp   Up 10 seconds
11:10:17 policy-pap        Up 11 seconds
11:10:17 policy-api        Up 12 seconds
11:10:17 grafana           Up 17 seconds
11:10:17 kafka             Up 19 seconds
11:10:17 mariadb           Up 14 seconds
11:10:17 prometheus        Up 17 seconds
11:10:17 simulator         Up 15 seconds
11:10:17 zookeeper         Up 20 seconds
11:10:22 NAMES             STATUS
11:10:22 policy-apex-pdp   Up 15 seconds
11:10:22 policy-pap        Up 16 seconds
11:10:22 policy-api        Up 17 seconds
11:10:22 grafana           Up 22 seconds
11:10:22 kafka             Up 24 seconds
11:10:22 mariadb           Up 19 seconds
11:10:22 prometheus        Up 23 seconds
11:10:22 simulator         Up 20 seconds
11:10:22 zookeeper         Up 25 seconds
11:10:27 NAMES             STATUS
11:10:27 policy-apex-pdp   Up 20 seconds
11:10:27 policy-pap        Up 21 seconds
11:10:27 policy-api        Up 22 seconds
11:10:27 grafana           Up 27 seconds
11:10:27 kafka             Up 29 seconds
11:10:27 mariadb           Up 24 seconds
11:10:27 prometheus        Up 28 seconds
11:10:27 simulator         Up 26 seconds
11:10:27 zookeeper         Up 30 seconds
11:10:32 NAMES             STATUS
11:10:32 policy-apex-pdp   Up 25 seconds
11:10:32 policy-pap        Up 26 seconds
11:10:32 policy-api        Up 27 seconds
11:10:32 grafana           Up 32 seconds
11:10:32 kafka             Up 34 seconds
11:10:32 mariadb           Up 30 seconds
11:10:32 prometheus        Up 33 seconds
11:10:32 simulator         Up 31 seconds
11:10:32 zookeeper         Up 35 seconds
11:10:37 NAMES             STATUS
11:10:37 policy-apex-pdp   Up 30 seconds
11:10:37 policy-pap        Up 31 seconds
11:10:37 policy-api        Up 32 seconds
11:10:37 grafana           Up 37 seconds
11:10:37 kafka             Up 39 seconds
11:10:37 mariadb           Up 35 seconds
11:10:37 prometheus        Up 38 seconds
11:10:37 simulator         Up 36 seconds
11:10:37 zookeeper         Up 40 seconds
11:10:42 NAMES             STATUS
11:10:42 policy-apex-pdp   Up 35 seconds
11:10:42 policy-pap        Up 36 seconds
11:10:42 policy-api        Up 37 seconds
11:10:42 grafana           Up 42 seconds
11:10:42 kafka             Up 44 seconds
11:10:42 mariadb           Up 40 seconds
11:10:42 prometheus        Up 43 seconds
11:10:42 simulator         Up 41 seconds
11:10:42 zookeeper         Up 45 seconds
11:10:47 NAMES             STATUS
11:10:47 policy-apex-pdp   Up 40 seconds
11:10:47 policy-pap        Up 41 seconds
11:10:47 policy-api        Up 43 seconds
11:10:47 grafana           Up 47 seconds
11:10:47 kafka             Up 49 seconds
11:10:47 mariadb           Up 45 seconds
11:10:47 prometheus        Up 48 seconds
11:10:47 simulator         Up 46 seconds
11:10:47 zookeeper         Up 50 seconds
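wait_for_rest.sh polls until something answers on the given port, printing the container table between attempts; the five-second cadence is visible in the timestamps above. Its body is not part of this log, so the following is a hypothetical reconstruction of the loop:

    # wait_for_rest.sh <host> <port>  (sketch; a netcat-based probe is assumed)
    host="$1"; port="$2"
    echo "Waiting for REST to come up on ${host} port ${port}..."
    while ! nc -z "${host}" "${port}"; do
        docker ps --format 'table {{ .Names }}\t{{ .Status }}'
        sleep 5
    done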
11:10:47 ++ export 'SUITES=pap-test.robot
11:10:47 pap-slas.robot'
11:10:47 ++ SUITES='pap-test.robot
11:10:47 pap-slas.robot'
11:10:47 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
11:10:47 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
11:10:47 + load_set
11:10:47 + _setopts=hxB
11:10:47 ++ echo braceexpand:hashall:interactive-comments:xtrace
11:10:47 ++ tr : ' '
11:10:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:10:47 + set +o braceexpand
11:10:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:10:47 + set +o hashall
11:10:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:10:47 + set +o interactive-comments
11:10:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:10:47 + set +o xtrace
11:10:47 ++ echo hxB
11:10:47 ++ sed 's/./& /g'
11:10:47 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:10:47 + set +h
11:10:47 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:10:47 + set +x
11:10:47 + docker_stats
11:10:47 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
11:10:47 ++ uname -s
11:10:47 + '[' Linux == Darwin ']'
11:10:47 + sh -c 'top -bn1 | head -3'
11:10:47 top - 11:10:47 up 5 min, 0 users, load average: 3.09, 1.39, 0.56
11:10:47 Tasks: 208 total, 1 running, 132 sleeping, 0 stopped, 0 zombie
11:10:47 %Cpu(s): 13.2 us, 3.0 sy, 0.0 ni, 79.3 id, 4.4 wa, 0.0 hi, 0.1 si, 0.1 st
11:10:47 + echo
11:10:47
11:10:47 + sh -c 'free -h'
11:10:47               total        used        free      shared  buff/cache   available
11:10:47 Mem:            31G        2.8G         22G        1.3M        6.2G         28G
11:10:47 Swap:          1.0G          0B        1.0G
11:10:47 + echo
11:10:47
11:10:47 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
11:10:47 NAMES             STATUS
11:10:47 policy-apex-pdp   Up 40 seconds
11:10:47 policy-pap        Up 42 seconds
11:10:47 policy-api        Up 43 seconds
11:10:47 grafana           Up 47 seconds
11:10:47 kafka             Up 49 seconds
11:10:47 mariadb           Up 45 seconds
11:10:47 prometheus        Up 48 seconds
11:10:47 simulator         Up 46 seconds
11:10:47 zookeeper         Up 50 seconds
11:10:47 + echo
11:10:47
11:10:47 + docker stats --no-stream
11:10:50 CONTAINER ID   NAME              CPU %    MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O        PIDS
11:10:50 478613d856b0   policy-apex-pdp   2.19%    184MiB / 31.41GiB     0.57%   7.54kB / 7.34kB   0B / 0B          48
11:10:50 c5b59a35488a   policy-pap        17.35%   555.2MiB / 31.41GiB   1.73%   35kB / 36.9kB     8.19kB / 149MB   63
11:10:50 b57caca83b49   policy-api        0.24%    567MiB / 31.41GiB     1.76%   989kB / 674kB     0B / 0B          56
11:10:50 ec8478b15560   grafana           0.21%    57.8MiB / 31.41GiB    0.18%   19.1kB / 3.44kB   0B / 24.9MB      17
11:10:50 50b6ce5b4527   kafka             40.25%   375.9MiB / 31.41GiB   1.17%   74.7kB / 77.5kB   0B / 475kB       86
11:10:50 eb74c55bbb58   mariadb           0.02%    101.8MiB / 31.41GiB   0.32%   936kB / 1.18MB    11MB / 63.8MB    39
11:10:50 ba6452c2eae9   prometheus        0.07%    18.89MiB / 31.41GiB   0.06%   28.3kB / 1.09kB   0B / 0B          13
11:10:50 4278d0702212   simulator         0.08%    120.5MiB / 31.41GiB   0.37%   1.31kB / 0B       45.1kB / 0B      76
11:10:50 2f97c4d8af81   zookeeper         0.15%    99.88MiB / 31.41GiB   0.31%   58.5kB / 51.6kB   90.1kB / 401kB   60
11:10:50 + echo
11:10:50
11:10:50 + cd /tmp/tmp.t5Zgostjf0
11:10:50 + echo 'Reading the testplan:'
11:10:50 Reading the testplan:
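The trace that follows expands the testplan: comments and blank lines are stripped, each remaining suite name is prefixed with the tests directory, and xargs joins the result into one space-separated SUITES string. Collapsed into a single pipeline, it is roughly:

    # Condensed form of the traced testplan expansion (paths from the log).
    TESTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
    echo "${SUITES}" \
        | egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' \
        | sed "s|^|${TESTS}/|" > testplan.txt
    SUITES=$(xargs < testplan.txt)   # absolute suite paths on one line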
11:10:50 + echo 'pap-test.robot
11:10:50 pap-slas.robot'
11:10:50 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
11:10:50 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
11:10:50 + cat testplan.txt
11:10:50 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
11:10:50 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
11:10:50 ++ xargs
11:10:50 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
11:10:50 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
11:10:50 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
11:10:50 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
11:10:50 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
11:10:50 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
11:10:50 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
11:10:50 + relax_set
11:10:50 + set +e
11:10:50 + set +o pipefail
11:10:50 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
11:10:50 ==============================================================================
11:10:50 pap
11:10:50 ==============================================================================
11:10:50 pap.Pap-Test
11:10:50 ==============================================================================
11:10:51 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
11:10:51 ------------------------------------------------------------------------------
11:10:52 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
11:10:52 ------------------------------------------------------------------------------
11:10:52 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
11:10:52 ------------------------------------------------------------------------------
11:10:53 Healthcheck :: Verify policy pap health check | PASS |
11:10:53 ------------------------------------------------------------------------------
11:11:13 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
11:11:13 ------------------------------------------------------------------------------
11:11:13 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
11:11:13 ------------------------------------------------------------------------------
11:11:14 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
11:11:14 ------------------------------------------------------------------------------
11:11:14 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
11:11:14 ------------------------------------------------------------------------------
11:11:14 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
11:11:14 ------------------------------------------------------------------------------
11:11:14 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
11:11:14 ------------------------------------------------------------------------------
11:11:15 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
11:11:15 ------------------------------------------------------------------------------
11:11:15 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
11:11:15 ------------------------------------------------------------------------------
11:11:15 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
11:11:15 ------------------------------------------------------------------------------
11:11:15 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
11:11:15 ------------------------------------------------------------------------------
11:11:16 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
11:11:16 ------------------------------------------------------------------------------
11:11:16 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
11:11:16 ------------------------------------------------------------------------------
11:11:16 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
11:11:16 ------------------------------------------------------------------------------
11:11:36 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
11:11:36 ------------------------------------------------------------------------------
11:11:36 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
11:11:36 ------------------------------------------------------------------------------
11:11:36 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
11:11:36 ------------------------------------------------------------------------------
11:11:37 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
11:11:37 ------------------------------------------------------------------------------
11:11:37 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
11:11:37 ------------------------------------------------------------------------------
11:11:37 pap.Pap-Test | PASS |
11:11:37 22 tests, 22 passed, 0 failed
11:11:37 ==============================================================================
11:11:37 pap.Pap-Slas
11:11:37 ==============================================================================
11:12:37 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
11:12:37 ------------------------------------------------------------------------------
11:12:37 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
11:12:37 ------------------------------------------------------------------------------
11:12:37 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
11:12:37 ------------------------------------------------------------------------------
11:12:37 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
11:12:37 ------------------------------------------------------------------------------
11:12:37 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
11:12:37 ------------------------------------------------------------------------------
11:12:37 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
11:12:37 ------------------------------------------------------------------------------
11:12:37 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
11:12:37 ------------------------------------------------------------------------------
11:12:37 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
11:12:37 ------------------------------------------------------------------------------
11:12:37 pap.Pap-Slas | PASS |
11:12:37 8 tests, 8 passed, 0 failed
11:12:37 ==============================================================================
11:12:37 pap | PASS |
11:12:37 30 tests, 30 passed, 0 failed
11:12:37 ==============================================================================
11:12:37 Output:  /tmp/tmp.t5Zgostjf0/output.xml
11:12:37 Log:     /tmp/tmp.t5Zgostjf0/log.html
11:12:37 Report:  /tmp/tmp.t5Zgostjf0/report.html
11:12:37 + RESULT=0
11:12:37 + load_set
11:12:37 + _setopts=hxB
11:12:37 ++ echo braceexpand:hashall:interactive-comments:xtrace
11:12:37 ++ tr : ' '
11:12:37 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:12:37 + set +o braceexpand
11:12:37 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:12:37 + set +o hashall
11:12:37 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:12:37 + set +o interactive-comments
11:12:37 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:12:37 + set +o xtrace
11:12:37 ++ echo hxB
11:12:37 ++ sed 's/./& /g'
11:12:37 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:12:37 + set +h
11:12:37 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:12:37 + set +x
11:12:37 + echo 'RESULT: 0'
11:12:37 RESULT: 0
11:12:37 + exit 0
11:12:37 + on_exit
11:12:37 + rc=0
11:12:37 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
11:12:37 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
11:12:37 NAMES             STATUS
11:12:37 policy-apex-pdp   Up 2 minutes
11:12:37 policy-pap        Up 2 minutes
11:12:37 policy-api        Up 2 minutes
11:12:37 grafana           Up 2 minutes
11:12:37 kafka             Up 2 minutes
11:12:37 mariadb           Up 2 minutes
11:12:37 prometheus        Up 2 minutes
11:12:37 simulator         Up 2 minutes
11:12:37 zookeeper         Up 2 minutes
11:12:37 + docker_stats
11:12:37 ++ uname -s
11:12:37 + '[' Linux == Darwin ']'
11:12:37 + sh -c 'top -bn1 | head -3'
11:12:37 top - 11:12:37 up 6 min, 0 users, load average: 0.68, 1.08, 0.54
11:12:37 Tasks: 197 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
11:12:37 %Cpu(s): 10.7 us, 2.3 sy, 0.0 ni, 83.4 id, 3.5 wa, 0.0 hi, 0.0 si, 0.1 st
11:12:37 + echo
11:12:37
11:12:37 + sh -c 'free -h'
11:12:37               total        used        free      shared  buff/cache   available
11:12:37 Mem:            31G        2.8G         22G        1.3M        6.2G         28G
11:12:37 Swap:          1.0G          0B        1.0G
11:12:37 + echo
11:12:37
11:12:37 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
11:12:37 NAMES             STATUS
11:12:37 policy-apex-pdp   Up 2 minutes
11:12:37 policy-pap        Up 2 minutes
11:12:37 policy-api        Up 2 minutes
11:12:37 grafana           Up 2 minutes
11:12:37 kafka             Up 2 minutes
11:12:37 mariadb           Up 2 minutes
11:12:37 prometheus        Up 2 minutes
11:12:37 simulator         Up 2 minutes
11:12:37 zookeeper         Up 2 minutes
11:12:37 + echo
11:12:37
11:12:37 + docker stats --no-stream
11:12:40 CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O        PIDS
11:12:40 478613d856b0   policy-apex-pdp   0.35%   183.4MiB / 31.41GiB   0.57%   56.7kB / 91.2kB   0B / 0B          52
11:12:40 c5b59a35488a   policy-pap        0.63%   500.2MiB / 31.41GiB   1.55%   2.47MB / 1MB      8.19kB / 149MB   67
11:12:40 b57caca83b49   policy-api        0.72%   644.2MiB / 31.41GiB   2.00%   2.45MB / 1.09MB   0B / 0B          57
11:12:40 ec8478b15560   grafana           0.03%   58.27MiB / 31.41GiB   0.18%   19.8kB / 4.39kB   0B / 24.9MB      17
11:12:40 50b6ce5b4527   kafka             7.42%   386.5MiB / 31.41GiB   1.20%   242kB / 218kB     0B / 582kB       85
11:12:40 eb74c55bbb58   mariadb           0.02%   103MiB / 31.41GiB     0.32%   2.02MB / 4.87MB   11MB / 64MB      28
11:12:40 ba6452c2eae9   prometheus        0.11%   24.95MiB / 31.41GiB   0.08%   139kB / 10.1kB    0B / 0B          13
11:12:40 4278d0702212   simulator         0.06%   120.6MiB / 31.41GiB   0.38%   1.58kB / 0B       45.1kB / 0B      78
11:12:40 2f97c4d8af81   zookeeper         0.08%   100.7MiB / 31.41GiB   0.31%   61.3kB / 53.1kB   90.1kB / 401kB   60
11:12:40 + echo
11:12:40
11:12:40 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
11:12:40 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
11:12:40 + relax_set
11:12:40 + set +e
11:12:40 + set +o pipefail
11:12:40 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
11:12:40 ++ echo 'Shut down started!'
11:12:40 Shut down started!
11:12:40 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
11:12:40 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
11:12:40 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
11:12:40 ++ source export-ports.sh
11:12:40 ++ source get-versions.sh
11:12:42 ++ echo 'Collecting logs from docker compose containers...'
11:12:42 Collecting logs from docker compose containers...
11:12:42 ++ docker-compose logs
11:12:43 ++ cat docker_compose.log
11:12:43 Attaching to policy-apex-pdp, policy-pap, policy-api, policy-db-migrator, grafana, kafka, mariadb, prometheus, simulator, zookeeper
11:12:43 kafka | ===> User
11:12:43 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
11:12:43 kafka | ===> Configuring ...
11:12:43 kafka | Running in Zookeeper mode...
11:12:43 kafka | ===> Running preflight checks ...
11:12:43 kafka | ===> Check if /var/lib/kafka/data is writable ...
11:12:43 kafka | ===> Check if Zookeeper is healthy ...
11:12:43 kafka | [2024-04-25 11:10:02,584] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,586] INFO Client environment:host.name=50b6ce5b4527 (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,586] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,586] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,586] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
11:12:43 policy-api | Waiting for mariadb port 3306...
11:12:43 policy-api | mariadb (172.17.0.5:3306) open
11:12:43 policy-api | Waiting for policy-db-migrator port 6824...
11:12:43 policy-api | policy-db-migrator (172.17.0.8:6824) open
11:12:43 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
11:12:43 policy-api |
11:12:43 policy-api |   .   ____          _            __ _ _
11:12:43 policy-api |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
11:12:43 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
11:12:43 policy-api |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
11:12:43 policy-api |   '  |____| .__|_| |_|_| |_\__, | / / / /
11:12:43 policy-api |  =========|_|==============|___/=/_/_/_/
11:12:43 policy-api |  :: Spring Boot ::                (v3.1.10)
11:12:43 policy-api |
11:12:43 policy-api | [2024-04-25T11:10:18.265+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
11:12:43 policy-api | [2024-04-25T11:10:18.410+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 21 (/app/api.jar started by policy in /opt/app/policy/api/bin)
11:12:43 policy-api | [2024-04-25T11:10:18.412+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
11:12:43 policy-api | [2024-04-25T11:10:20.970+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
11:12:43 policy-api | [2024-04-25T11:10:21.067+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 85 ms. Found 6 JPA repository interfaces.
11:12:43 policy-api | [2024-04-25T11:10:21.627+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
11:12:43 policy-api | [2024-04-25T11:10:21.630+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
11:12:43 policy-api | [2024-04-25T11:10:22.375+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
11:12:43 policy-api | [2024-04-25T11:10:22.388+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
11:12:43 policy-api | [2024-04-25T11:10:22.391+00:00|INFO|StandardService|main] Starting service [Tomcat]
11:12:43 policy-api | [2024-04-25T11:10:22.391+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
11:12:43 policy-api | [2024-04-25T11:10:22.501+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
11:12:43 policy-api | [2024-04-25T11:10:22.501+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3935 ms
11:12:43 policy-api | [2024-04-25T11:10:23.024+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
11:12:43 policy-api | [2024-04-25T11:10:23.130+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final
11:12:43 policy-api | [2024-04-25T11:10:23.205+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
11:12:43 policy-api | [2024-04-25T11:10:23.641+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
11:12:43 policy-api | [2024-04-25T11:10:23.674+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
11:12:43 policy-api | [2024-04-25T11:10:23.797+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@75483843
11:12:43 policy-api | [2024-04-25T11:10:23.799+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
11:12:43 policy-api | [2024-04-25T11:10:26.096+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
11:12:43 policy-api | [2024-04-25T11:10:26.099+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
11:12:43 policy-api | [2024-04-25T11:10:27.318+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
11:12:43 policy-api | [2024-04-25T11:10:28.287+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
11:12:43 policy-api | [2024-04-25T11:10:29.819+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
11:12:43 policy-api | [2024-04-25T11:10:30.100+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@6f54a7be, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4c48ccc4, org.springframework.security.web.context.SecurityContextHolderFilter@2fcd0756, org.springframework.security.web.header.HeaderWriterFilter@20a4f67a, org.springframework.security.web.authentication.logout.LogoutFilter@567dc7d7, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@46270641, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@5b6fd32d, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@5e47e1f, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@4c32428a, org.springframework.security.web.access.ExceptionTranslationFilter@66741691, org.springframework.security.web.access.intercept.AuthorizationFilter@1d93bd2a]
11:12:43 policy-api | [2024-04-25T11:10:31.083+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
11:12:43 policy-api | [2024-04-25T11:10:31.203+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
11:12:43 policy-api | [2024-04-25T11:10:31.252+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
11:12:43 policy-api | [2024-04-25T11:10:31.278+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 13.985 seconds (process running for 14.624)
11:12:43 policy-api | [2024-04-25T11:10:39.918+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
11:12:43 policy-api | [2024-04-25T11:10:39.918+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
11:12:43 policy-api | [2024-04-25T11:10:39.921+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms
11:12:43 policy-api | [2024-04-25T11:10:51.099+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers:
11:12:43 policy-api | []
11:12:43 kafka | [2024-04-25 11:10:02,586] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,586] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,586] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,586] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,586] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,586] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,586] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,586] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,587] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,587] INFO Client environment:os.memory.free=494MB (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,587] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,587] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,590] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@61d47554 (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,593] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
11:12:43 kafka | [2024-04-25 11:10:02,598] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
11:12:43 kafka | [2024-04-25 11:10:02,607] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
11:12:43 kafka | [2024-04-25 11:10:02,678] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
11:12:43 kafka | [2024-04-25 11:10:02,678] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
11:12:43 kafka | [2024-04-25 11:10:02,688] INFO Socket connection established, initiating session, client: /172.17.0.6:42770, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
11:12:43 kafka | [2024-04-25 11:10:02,736] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003dee80000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
11:12:43 kafka | [2024-04-25 11:10:02,874] INFO Session: 0x1000003dee80000 closed (org.apache.zookeeper.ZooKeeper)
11:12:43 kafka | [2024-04-25 11:10:02,874] INFO EventThread shut down for session: 0x1000003dee80000 (org.apache.zookeeper.ClientCnxn)
11:12:43 kafka | Using log4j config /etc/kafka/log4j.properties
11:12:43 kafka | ===> Launching ...
11:12:43 kafka | ===> Launching kafka ...
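The ZooKeeper entries above record a short-lived probe session: the Confluent startup check opens a client session against zookeeper:2181 with a 40000 ms session timeout, confirms the handshake, and closes it before the broker proper is launched. A minimal sketch of the same session lifecycle using the stock org.apache.zookeeper client — ZkProbe and its lambda watcher are illustrative stand-ins, not the io.confluent.admin.utils.ZookeeperConnectionWatcher named in the log:

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public final class ZkProbe {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Same parameters the log shows: connectString=zookeeper:2181, sessionTimeout=40000
        ZooKeeper zk = new ZooKeeper("zookeeper:2181", 40000, (WatchedEvent e) -> {
            if (e.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await(); // corresponds to "Session establishment complete ... negotiated timeout = 40000"
        System.out.printf("session id = 0x%x%n", zk.getSessionId());
        zk.close();        // produces the "Session: 0x... closed" / "EventThread shut down" pair
    }
}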
11:12:43 kafka | [2024-04-25 11:10:03,722] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 11:12:43 kafka | [2024-04-25 11:10:04,120] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 11:12:43 kafka | [2024-04-25 11:10:04,196] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 11:12:43 kafka | [2024-04-25 11:10:04,197] INFO starting (kafka.server.KafkaServer) 11:12:43 kafka | [2024-04-25 11:10:04,197] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 11:12:43 kafka | [2024-04-25 11:10:04,211] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 11:12:43 kafka | [2024-04-25 11:10:04,215] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,215] INFO Client environment:host.name=50b6ce5b4527 (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,215] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,215] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,215] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,215] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.
jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0
.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,215] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,215] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,215] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,216] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,216] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,216] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,216] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,216] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,216] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,216] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,216] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,216] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,218] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@447a020 (org.apache.zookeeper.ZooKeeper) 11:12:43 kafka | [2024-04-25 11:10:04,221] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 11:12:43 kafka | [2024-04-25 11:10:04,228] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 11:12:43 kafka | [2024-04-25 11:10:04,234] INFO [ZooKeeperClient Kafka server] Waiting until connected. 
(kafka.zookeeper.ZooKeeperClient) 11:12:43 kafka | [2024-04-25 11:10:04,240] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 11:12:43 kafka | [2024-04-25 11:10:04,248] INFO Socket connection established, initiating session, client: /172.17.0.6:42772, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 11:12:43 kafka | [2024-04-25 11:10:04,259] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003dee80001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 11:12:43 kafka | [2024-04-25 11:10:04,264] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 11:12:43 kafka | [2024-04-25 11:10:04,744] INFO Cluster ID = hj8fcuYTRGyyshpZV-zZWg (kafka.server.KafkaServer) 11:12:43 kafka | [2024-04-25 11:10:04,747] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 11:12:43 kafka | [2024-04-25 11:10:04,822] INFO KafkaConfig values: 11:12:43 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 11:12:43 kafka | alter.config.policy.class.name = null 11:12:43 kafka | alter.log.dirs.replication.quota.window.num = 11 11:12:43 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 11:12:43 kafka | authorizer.class.name = 11:12:43 kafka | auto.create.topics.enable = true 11:12:43 kafka | auto.include.jmx.reporter = true 11:12:43 kafka | auto.leader.rebalance.enable = true 11:12:43 kafka | background.threads = 10 11:12:43 kafka | broker.heartbeat.interval.ms = 2000 11:12:43 kafka | broker.id = 1 11:12:43 kafka | broker.id.generation.enable = true 11:12:43 kafka | broker.rack = null 11:12:43 kafka | broker.session.timeout.ms = 9000 11:12:43 kafka | client.quota.callback.class = null 11:12:43 kafka | compression.type = producer 11:12:43 kafka | connection.failed.authentication.delay.ms = 100 11:12:43 kafka | connections.max.idle.ms = 600000 11:12:43 kafka | connections.max.reauth.ms = 0 11:12:43 kafka | control.plane.listener.name = null 11:12:43 kafka | controlled.shutdown.enable = true 11:12:43 kafka | controlled.shutdown.max.retries = 3 11:12:43 kafka | controlled.shutdown.retry.backoff.ms = 5000 11:12:43 kafka | controller.listener.names = null 11:12:43 kafka | controller.quorum.append.linger.ms = 25 11:12:43 kafka | controller.quorum.election.backoff.max.ms = 1000 11:12:43 kafka | controller.quorum.election.timeout.ms = 1000 11:12:43 kafka | controller.quorum.fetch.timeout.ms = 2000 11:12:43 kafka | controller.quorum.request.timeout.ms = 2000 11:12:43 kafka | controller.quorum.retry.backoff.ms = 20 11:12:43 kafka | controller.quorum.voters = [] 11:12:43 kafka | controller.quota.window.num = 11 11:12:43 kafka | controller.quota.window.size.seconds = 1 11:12:43 kafka | controller.socket.timeout.ms = 30000 11:12:43 kafka | create.topic.policy.class.name = null 11:12:43 kafka | default.replication.factor = 1 11:12:43 kafka | delegation.token.expiry.check.interval.ms = 3600000 11:12:43 kafka | delegation.token.expiry.time.ms = 86400000 11:12:43 kafka | delegation.token.master.key = null 11:12:43 kafka | delegation.token.max.lifetime.ms = 604800000 11:12:43 kafka | delegation.token.secret.key = null 11:12:43 kafka | delete.records.purgatory.purge.interval.requests = 1 11:12:43 kafka | delete.topic.enable = true 11:12:43 kafka | early.start.listeners = null 11:12:43 kafka | fetch.max.bytes = 57671680 11:12:43 kafka | fetch.purgatory.purge.interval.requests = 1000 
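The KafkaConfig dump that starts here (and continues below, interleaved with the other containers) shows advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, and the broker has just logged Cluster ID = hj8fcuYTRGyyshpZV-zZWg. A hedged sketch of confirming that id from the host side with the standard Kafka Admin client — ClusterIdCheck is a hypothetical name, and it assumes the PLAINTEXT_HOST listener is actually reachable as localhost:29092:

import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public final class ClusterIdCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Host-facing listener from the advertised.listeners line above.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
        try (Admin admin = Admin.create(props)) {
            // Should print the same id the broker logged at startup:
            // "Cluster ID = hj8fcuYTRGyyshpZV-zZWg"
            System.out.println(admin.describeCluster().clusterId().get());
        }
    }
}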
11:12:43 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 11:12:43 kafka | group.consumer.heartbeat.interval.ms = 5000 11:12:43 kafka | group.consumer.max.heartbeat.interval.ms = 15000 11:12:43 kafka | group.consumer.max.session.timeout.ms = 60000 11:12:43 kafka | group.consumer.max.size = 2147483647 11:12:43 kafka | group.consumer.min.heartbeat.interval.ms = 5000 11:12:43 kafka | group.consumer.min.session.timeout.ms = 45000 11:12:43 kafka | group.consumer.session.timeout.ms = 45000 11:12:43 kafka | group.coordinator.new.enable = false 11:12:43 kafka | group.coordinator.threads = 1 11:12:43 kafka | group.initial.rebalance.delay.ms = 3000 11:12:43 kafka | group.max.session.timeout.ms = 1800000 11:12:43 kafka | group.max.size = 2147483647 11:12:43 kafka | group.min.session.timeout.ms = 6000 11:12:43 kafka | initial.broker.registration.timeout.ms = 60000 11:12:43 kafka | inter.broker.listener.name = PLAINTEXT 11:12:43 kafka | inter.broker.protocol.version = 3.6-IV2 11:12:43 kafka | kafka.metrics.polling.interval.secs = 10 11:12:43 kafka | kafka.metrics.reporters = [] 11:12:43 kafka | leader.imbalance.check.interval.seconds = 300 11:12:43 kafka | leader.imbalance.per.broker.percentage = 10 11:12:43 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 11:12:43 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 11:12:43 kafka | log.cleaner.backoff.ms = 15000 11:12:43 kafka | log.cleaner.dedupe.buffer.size = 134217728 11:12:43 kafka | log.cleaner.delete.retention.ms = 86400000 11:12:43 kafka | log.cleaner.enable = true 11:12:43 kafka | log.cleaner.io.buffer.load.factor = 0.9 11:12:43 kafka | log.cleaner.io.buffer.size = 524288 11:12:43 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 11:12:43 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 11:12:43 kafka | log.cleaner.min.cleanable.ratio = 0.5 11:12:43 kafka | log.cleaner.min.compaction.lag.ms = 0 11:12:43 kafka | log.cleaner.threads = 1 11:12:43 kafka | log.cleanup.policy = [delete] 11:12:43 kafka | log.dir = /tmp/kafka-logs 11:12:43 kafka | log.dirs = /var/lib/kafka/data 11:12:43 kafka | log.flush.interval.messages = 9223372036854775807 11:12:43 kafka | log.flush.interval.ms = null 11:12:43 kafka | log.flush.offset.checkpoint.interval.ms = 60000 11:12:43 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 11:12:43 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 11:12:43 kafka | log.index.interval.bytes = 4096 11:12:43 kafka | log.index.size.max.bytes = 10485760 11:12:43 kafka | log.local.retention.bytes = -2 11:12:43 kafka | log.local.retention.ms = -2 11:12:43 kafka | log.message.downconversion.enable = true 11:12:43 kafka | log.message.format.version = 3.0-IV1 11:12:43 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 11:12:43 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 11:12:43 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 11:12:43 kafka | log.message.timestamp.type = CreateTime 11:12:43 kafka | log.preallocate = false 11:12:43 mariadb | 2024-04-25 11:10:02+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 11:12:43 mariadb | 2024-04-25 11:10:02+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 11:12:43 mariadb | 2024-04-25 11:10:02+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 
11:12:43 mariadb | 2024-04-25 11:10:02+00:00 [Note] [Entrypoint]: Initializing database files
11:12:43 mariadb | 2024-04-25 11:10:02 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
11:12:43 mariadb | 2024-04-25 11:10:02 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
11:12:43 mariadb | 2024-04-25 11:10:02 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
11:12:43 mariadb |
11:12:43 mariadb |
11:12:43 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
11:12:43 mariadb | To do so, start the server, then issue the following command:
11:12:43 mariadb |
11:12:43 mariadb | '/usr/bin/mysql_secure_installation'
11:12:43 mariadb |
11:12:43 mariadb | which will also give you the option of removing the test
11:12:43 mariadb | databases and anonymous user created by default. This is
11:12:43 mariadb | strongly recommended for production servers.
11:12:43 kafka | log.retention.bytes = -1
11:12:43 kafka | log.retention.check.interval.ms = 300000
11:12:43 kafka | log.retention.hours = 168
11:12:43 kafka | log.retention.minutes = null
11:12:43 kafka | log.retention.ms = null
11:12:43 kafka | log.roll.hours = 168
11:12:43 kafka | log.roll.jitter.hours = 0
11:12:43 kafka | log.roll.jitter.ms = null
11:12:43 kafka | log.roll.ms = null
11:12:43 kafka | log.segment.bytes = 1073741824
11:12:43 kafka | log.segment.delete.delay.ms = 60000
11:12:43 kafka | max.connection.creation.rate = 2147483647
11:12:43 kafka | max.connections = 2147483647
11:12:43 kafka | max.connections.per.ip = 2147483647
11:12:43 kafka | max.connections.per.ip.overrides =
11:12:43 kafka | max.incremental.fetch.session.cache.slots = 1000
11:12:43 kafka | message.max.bytes = 1048588
11:12:43 kafka | metadata.log.dir = null
11:12:43 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
11:12:43 kafka | metadata.log.max.snapshot.interval.ms = 3600000
11:12:43 kafka | metadata.log.segment.bytes = 1073741824
11:12:43 kafka | metadata.log.segment.min.bytes = 8388608
11:12:43 policy-apex-pdp | Waiting for mariadb port 3306...
11:12:43 policy-apex-pdp | mariadb (172.17.0.5:3306) open
11:12:43 policy-apex-pdp | Waiting for kafka port 9092...
11:12:43 policy-apex-pdp | kafka (172.17.0.6:9092) open
11:12:43 policy-apex-pdp | Waiting for pap port 6969...
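The "Waiting for <service> port ..." lines above (the pap check completes just below, and the policy-db-migrator output further down shows the nc-based variant with its Connection refused retries) come from wait-for-port loops in the containers' startup scripts. A rough Java equivalent of that loop, as a sketch only — the real scripts are shell, and WaitForPort plus the hardcoded mariadb:3306 target are illustrative:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public final class WaitForPort {
    public static void main(String[] args) throws InterruptedException {
        String host = "mariadb"; // e.g. mariadb:3306, as in the log
        int port = 3306;
        while (true) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 2_000);
                System.out.println(host + " (" + port + ") open"); // "mariadb (172.17.0.5:3306) open"
                return;
            } catch (IOException e) {
                // mirrors the repeated "nc: connect to mariadb ... Connection refused" lines
                System.out.println("connect to " + host + " port " + port + " failed: " + e.getMessage());
                Thread.sleep(1_000);
            }
        }
    }
}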
11:12:43 policy-apex-pdp | pap (172.17.0.10:6969) open 11:12:43 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.213+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.461+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:12:43 policy-apex-pdp | allow.auto.create.topics = true 11:12:43 policy-apex-pdp | auto.commit.interval.ms = 5000 11:12:43 policy-apex-pdp | auto.include.jmx.reporter = true 11:12:43 policy-apex-pdp | auto.offset.reset = latest 11:12:43 policy-apex-pdp | bootstrap.servers = [kafka:9092] 11:12:43 policy-apex-pdp | check.crcs = true 11:12:43 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 11:12:43 policy-apex-pdp | client.id = consumer-51f148c3-bcf8-4571-938a-66df08a6d568-1 11:12:43 policy-apex-pdp | client.rack = 11:12:43 policy-apex-pdp | connections.max.idle.ms = 540000 11:12:43 policy-apex-pdp | default.api.timeout.ms = 60000 11:12:43 policy-apex-pdp | enable.auto.commit = true 11:12:43 policy-apex-pdp | exclude.internal.topics = true 11:12:43 policy-apex-pdp | fetch.max.bytes = 52428800 11:12:43 policy-apex-pdp | fetch.max.wait.ms = 500 11:12:43 policy-apex-pdp | fetch.min.bytes = 1 11:12:43 policy-apex-pdp | group.id = 51f148c3-bcf8-4571-938a-66df08a6d568 11:12:43 policy-apex-pdp | group.instance.id = null 11:12:43 policy-apex-pdp | heartbeat.interval.ms = 3000 11:12:43 policy-apex-pdp | interceptor.classes = [] 11:12:43 policy-apex-pdp | internal.leave.group.on.close = true 11:12:43 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 11:12:43 policy-apex-pdp | isolation.level = read_uncommitted 11:12:43 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:12:43 policy-apex-pdp | max.partition.fetch.bytes = 1048576 11:12:43 policy-apex-pdp | max.poll.interval.ms = 300000 11:12:43 policy-apex-pdp | max.poll.records = 500 11:12:43 policy-apex-pdp | metadata.max.age.ms = 300000 11:12:43 policy-apex-pdp | metric.reporters = [] 11:12:43 policy-apex-pdp | metrics.num.samples = 2 11:12:43 policy-apex-pdp | metrics.recording.level = INFO 11:12:43 policy-apex-pdp | metrics.sample.window.ms = 30000 11:12:43 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:12:43 policy-apex-pdp | receive.buffer.bytes = 65536 11:12:43 kafka | metadata.log.segment.ms = 604800000 11:12:43 kafka | metadata.max.idle.interval.ms = 500 11:12:43 kafka | metadata.max.retention.bytes = 104857600 11:12:43 kafka | metadata.max.retention.ms = 604800000 11:12:43 kafka | 
metric.reporters = [] 11:12:43 kafka | metrics.num.samples = 2 11:12:43 kafka | metrics.recording.level = INFO 11:12:43 kafka | metrics.sample.window.ms = 30000 11:12:43 kafka | min.insync.replicas = 1 11:12:43 kafka | node.id = 1 11:12:43 kafka | num.io.threads = 8 11:12:43 kafka | num.network.threads = 3 11:12:43 kafka | num.partitions = 1 11:12:43 kafka | num.recovery.threads.per.data.dir = 1 11:12:43 kafka | num.replica.alter.log.dirs.threads = null 11:12:43 kafka | num.replica.fetchers = 1 11:12:43 kafka | offset.metadata.max.bytes = 4096 11:12:43 kafka | offsets.commit.required.acks = -1 11:12:43 kafka | offsets.commit.timeout.ms = 5000 11:12:43 kafka | offsets.load.buffer.size = 5242880 11:12:43 kafka | offsets.retention.check.interval.ms = 600000 11:12:43 kafka | offsets.retention.minutes = 10080 11:12:43 kafka | offsets.topic.compression.codec = 0 11:12:43 kafka | offsets.topic.num.partitions = 50 11:12:43 kafka | offsets.topic.replication.factor = 1 11:12:43 kafka | offsets.topic.segment.bytes = 104857600 11:12:43 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 11:12:43 kafka | password.encoder.iterations = 4096 11:12:43 kafka | password.encoder.key.length = 128 11:12:43 kafka | password.encoder.keyfactory.algorithm = null 11:12:43 kafka | password.encoder.old.secret = null 11:12:43 kafka | password.encoder.secret = null 11:12:43 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 11:12:43 kafka | process.roles = [] 11:12:43 kafka | producer.id.expiration.check.interval.ms = 600000 11:12:43 kafka | producer.id.expiration.ms = 86400000 11:12:43 kafka | producer.purgatory.purge.interval.requests = 1000 11:12:43 kafka | queued.max.request.bytes = -1 11:12:43 kafka | queued.max.requests = 500 11:12:43 kafka | quota.window.num = 11 11:12:43 kafka | quota.window.size.seconds = 1 11:12:43 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 11:12:43 kafka | remote.log.manager.task.interval.ms = 30000 11:12:43 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 11:12:43 kafka | remote.log.manager.task.retry.backoff.ms = 500 11:12:43 kafka | remote.log.manager.task.retry.jitter = 0.2 11:12:43 kafka | remote.log.manager.thread.pool.size = 10 11:12:43 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 11:12:43 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 11:12:43 kafka | remote.log.metadata.manager.class.path = null 11:12:43 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 11:12:43 kafka | remote.log.metadata.manager.listener.name = null 11:12:43 kafka | remote.log.reader.max.pending.tasks = 100 11:12:43 kafka | remote.log.reader.threads = 10 11:12:43 kafka | remote.log.storage.manager.class.name = null 11:12:43 kafka | remote.log.storage.manager.class.path = null 11:12:43 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
11:12:43 kafka | remote.log.storage.system.enable = false 11:12:43 kafka | replica.fetch.backoff.ms = 1000 11:12:43 kafka | replica.fetch.max.bytes = 1048576 11:12:43 kafka | replica.fetch.min.bytes = 1 11:12:43 kafka | replica.fetch.response.max.bytes = 10485760 11:12:43 kafka | replica.fetch.wait.max.ms = 500 11:12:43 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 11:12:43 kafka | replica.lag.time.max.ms = 30000 11:12:43 kafka | replica.selector.class = null 11:12:43 kafka | replica.socket.receive.buffer.bytes = 65536 11:12:43 kafka | replica.socket.timeout.ms = 30000 11:12:43 kafka | replication.quota.window.num = 11 11:12:43 kafka | replication.quota.window.size.seconds = 1 11:12:43 kafka | request.timeout.ms = 30000 11:12:43 kafka | reserved.broker.max.id = 1000 11:12:43 kafka | sasl.client.callback.handler.class = null 11:12:43 kafka | sasl.enabled.mechanisms = [GSSAPI] 11:12:43 policy-apex-pdp | reconnect.backoff.max.ms = 1000 11:12:43 policy-apex-pdp | reconnect.backoff.ms = 50 11:12:43 policy-apex-pdp | request.timeout.ms = 30000 11:12:43 policy-apex-pdp | retry.backoff.ms = 100 11:12:43 policy-apex-pdp | sasl.client.callback.handler.class = null 11:12:43 policy-apex-pdp | sasl.jaas.config = null 11:12:43 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:12:43 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 11:12:43 policy-apex-pdp | sasl.kerberos.service.name = null 11:12:43 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 11:12:43 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 11:12:43 policy-apex-pdp | sasl.login.callback.handler.class = null 11:12:43 policy-apex-pdp | sasl.login.class = null 11:12:43 policy-apex-pdp | sasl.login.connect.timeout.ms = null 11:12:43 policy-apex-pdp | sasl.login.read.timeout.ms = null 11:12:43 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 11:12:43 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 11:12:43 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 11:12:43 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 11:12:43 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 11:12:43 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 11:12:43 policy-apex-pdp | sasl.mechanism = GSSAPI 11:12:43 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 11:12:43 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 11:12:43 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 11:12:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:12:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:12:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:12:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 11:12:43 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 11:12:43 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 11:12:43 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 11:12:43 policy-apex-pdp | security.protocol = PLAINTEXT 11:12:43 policy-apex-pdp | security.providers = null 11:12:43 policy-apex-pdp | send.buffer.bytes = 131072 11:12:43 policy-apex-pdp | session.timeout.ms = 45000 11:12:43 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 11:12:43 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 11:12:43 policy-apex-pdp | ssl.cipher.suites = null 11:12:43 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:12:43 policy-apex-pdp | ssl.endpoint.identification.algorithm = 
https 11:12:43 policy-apex-pdp | ssl.engine.factory.class = null 11:12:43 policy-apex-pdp | ssl.key.password = null 11:12:43 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 11:12:43 policy-apex-pdp | ssl.keystore.certificate.chain = null 11:12:43 policy-apex-pdp | ssl.keystore.key = null 11:12:43 policy-apex-pdp | ssl.keystore.location = null 11:12:43 policy-apex-pdp | ssl.keystore.password = null 11:12:43 policy-apex-pdp | ssl.keystore.type = JKS 11:12:43 policy-apex-pdp | ssl.protocol = TLSv1.3 11:12:43 policy-apex-pdp | ssl.provider = null 11:12:43 policy-apex-pdp | ssl.secure.random.implementation = null 11:12:43 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 11:12:43 policy-apex-pdp | ssl.truststore.certificates = null 11:12:43 policy-apex-pdp | ssl.truststore.location = null 11:12:43 policy-apex-pdp | ssl.truststore.password = null 11:12:43 policy-apex-pdp | ssl.truststore.type = JKS 11:12:43 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:12:43 policy-apex-pdp | 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.641+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.641+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.641+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714043446639 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.644+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-51f148c3-bcf8-4571-938a-66df08a6d568-1, groupId=51f148c3-bcf8-4571-938a-66df08a6d568] Subscribed to topic(s): policy-pdp-pap 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.656+00:00|INFO|ServiceManager|main] service manager starting 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.657+00:00|INFO|ServiceManager|main] service manager starting topics 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.659+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=51f148c3-bcf8-4571-938a-66df08a6d568, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.678+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:12:43 policy-apex-pdp | allow.auto.create.topics = true 11:12:43 policy-apex-pdp | auto.commit.interval.ms = 5000 11:12:43 policy-apex-pdp | auto.include.jmx.reporter = true 11:12:43 policy-apex-pdp | auto.offset.reset = latest 11:12:43 policy-apex-pdp | bootstrap.servers = [kafka:9092] 11:12:43 policy-apex-pdp | check.crcs = true 11:12:43 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 11:12:43 policy-apex-pdp | client.id = consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2 11:12:43 kafka | sasl.jaas.config = null 11:12:43 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:12:43 kafka | sasl.kerberos.min.time.before.relogin = 60000 11:12:43 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 11:12:43 kafka | sasl.kerberos.service.name = null 11:12:43 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 11:12:43 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 11:12:43 
kafka | sasl.login.callback.handler.class = null
11:12:43 kafka | sasl.login.class = null
11:12:43 kafka | sasl.login.connect.timeout.ms = null
11:12:43 kafka | sasl.login.read.timeout.ms = null
11:12:43 policy-db-migrator | Waiting for mariadb port 3306...
11:12:43 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused
11:12:43 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused
11:12:43 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused
11:12:43 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused
11:12:43 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused
11:12:43 policy-db-migrator | nc: connect to mariadb (172.17.0.5) port 3306 (tcp) failed: Connection refused
11:12:43 policy-db-migrator | Connection to mariadb (172.17.0.5) 3306 port [tcp/mysql] succeeded!
11:12:43 policy-db-migrator | 321 blocks
11:12:43 policy-db-migrator | Preparing upgrade release version: 0800
11:12:43 policy-db-migrator | Preparing upgrade release version: 0900
11:12:43 policy-db-migrator | Preparing upgrade release version: 1000
11:12:43 policy-db-migrator | Preparing upgrade release version: 1100
11:12:43 policy-db-migrator | Preparing upgrade release version: 1200
11:12:43 policy-db-migrator | Preparing upgrade release version: 1300
11:12:43 policy-db-migrator | Done
11:12:43 policy-db-migrator | name version
11:12:43 policy-db-migrator | policyadmin 0
11:12:43 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
11:12:43 policy-db-migrator | upgrade: 0 -> 1300
11:12:43 policy-db-migrator |
11:12:43 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator |
11:12:43 policy-db-migrator |
11:12:43 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator |
11:12:43 policy-db-migrator |
11:12:43 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator |
11:12:43 policy-db-migrator |
11:12:43 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator |
11:12:43 kafka | sasl.login.refresh.buffer.seconds = 300
11:12:43 kafka | sasl.login.refresh.min.period.seconds = 60
11:12:43 kafka | sasl.login.refresh.window.factor = 0.8
11:12:43 kafka | sasl.login.refresh.window.jitter = 0.05
11:12:43 kafka | sasl.login.retry.backoff.max.ms = 10000
11:12:43 kafka | sasl.login.retry.backoff.ms = 100
11:12:43 kafka | sasl.mechanism.controller.protocol = GSSAPI
11:12:43 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
11:12:43 kafka | sasl.oauthbearer.clock.skew.seconds = 30
11:12:43 kafka | sasl.oauthbearer.expected.audience = null
11:12:43 kafka | sasl.oauthbearer.expected.issuer = null
11:12:43 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
11:12:43 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
11:12:43 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
11:12:43 kafka | sasl.oauthbearer.jwks.endpoint.url = null
11:12:43 kafka | sasl.oauthbearer.scope.claim.name = scope
11:12:43 kafka | sasl.oauthbearer.sub.claim.name = sub
11:12:43 kafka | sasl.oauthbearer.token.endpoint.url = null
11:12:43 kafka | sasl.server.callback.handler.class = null
11:12:43 kafka | sasl.server.max.receive.size = 524288
11:12:43 kafka | security.inter.broker.protocol = PLAINTEXT
11:12:43 kafka | security.providers = null
11:12:43 kafka | server.max.startup.time.ms = 9223372036854775807
11:12:43 kafka | socket.connection.setup.timeout.max.ms = 30000
11:12:43 kafka | socket.connection.setup.timeout.ms = 10000
11:12:43 kafka | socket.listen.backlog.size = 50
11:12:43 kafka | socket.receive.buffer.bytes = 102400
11:12:43 kafka | socket.request.max.bytes = 104857600
11:12:43 kafka | socket.send.buffer.bytes = 102400
11:12:43 kafka | ssl.cipher.suites = []
11:12:43 kafka | ssl.client.auth = none
11:12:43 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
11:12:43 kafka | ssl.endpoint.identification.algorithm = https
11:12:43 kafka | ssl.engine.factory.class = null
11:12:43 kafka | ssl.key.password = null
11:12:43 kafka | ssl.keymanager.algorithm = SunX509
11:12:43 kafka | ssl.keystore.certificate.chain = null
11:12:43 kafka | ssl.keystore.key = null
11:12:43 kafka | ssl.keystore.location = null
11:12:43 kafka | ssl.keystore.password = null
11:12:43 kafka | ssl.keystore.type = JKS
11:12:43 kafka | ssl.principal.mapping.rules = DEFAULT
11:12:43 kafka | ssl.protocol = TLSv1.3
11:12:43 kafka | ssl.provider = null
11:12:43 kafka | ssl.secure.random.implementation = null
11:12:43 kafka | ssl.trustmanager.algorithm = PKIX
11:12:43 kafka | ssl.truststore.certificates = null
11:12:43 kafka | ssl.truststore.location = null
11:12:43 kafka | ssl.truststore.password = null
11:12:43 kafka | ssl.truststore.type = JKS
11:12:43 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
11:12:43 kafka | transaction.max.timeout.ms = 900000
11:12:43 kafka | transaction.partition.verification.enable = true
11:12:43 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
11:12:43 kafka | transaction.state.log.load.buffer.size = 5242880
11:12:43 kafka | transaction.state.log.min.isr = 2
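The migrator walks numbered SQL steps (0100-jpapdpgroup_properties.sql onward) against MariaDB; each step above is plain DDL guarded by IF NOT EXISTS, which keeps re-runs idempotent. A minimal JDBC sketch that applies the 0120 statement verbatim — the URL, database name, and credentials here are placeholders (the real values come from the container environment), and the actual migrator drives these scripts from shell rather than JDBC:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public final class MigrationStep {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; assumes the MariaDB Connector/J driver is on the classpath.
        String url = "jdbc:mariadb://mariadb:3306/policyadmin";
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_pass");
             Statement stmt = conn.createStatement()) {
            // DDL copied from the 0120-jpapdpsubgroup_policies.sql step in the log above.
            stmt.execute("CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies ("
                    + "name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, "
                    + "parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, "
                    + "parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)");
        }
    }
}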
11:12:43 kafka | transaction.state.log.num.partitions = 50 11:12:43 kafka | transaction.state.log.replication.factor = 3 11:12:43 kafka | transaction.state.log.segment.bytes = 104857600 11:12:43 kafka | transactional.id.expiration.ms = 604800000 11:12:43 kafka | unclean.leader.election.enable = false 11:12:43 kafka | unstable.api.versions.enable = false 11:12:43 kafka | zookeeper.clientCnxnSocket = null 11:12:43 kafka | zookeeper.connect = zookeeper:2181 11:12:43 kafka | zookeeper.connection.timeout.ms = null 11:12:43 kafka | zookeeper.max.in.flight.requests = 10 11:12:43 kafka | zookeeper.metadata.migration.enable = false 11:12:43 kafka | zookeeper.metadata.migration.min.batch.size = 200 11:12:43 kafka | zookeeper.session.timeout.ms = 18000 11:12:43 kafka | zookeeper.set.acl = false 11:12:43 kafka | zookeeper.ssl.cipher.suites = null 11:12:43 kafka | zookeeper.ssl.client.enable = false 11:12:43 kafka | zookeeper.ssl.crl.enable = false 11:12:43 policy-apex-pdp | client.rack = 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 11:12:43 policy-apex-pdp | connections.max.idle.ms = 540000 11:12:43 policy-apex-pdp | default.api.timeout.ms = 60000 11:12:43 policy-apex-pdp | enable.auto.commit = true 11:12:43 policy-apex-pdp | exclude.internal.topics = true 11:12:43 policy-apex-pdp | fetch.max.bytes = 52428800 11:12:43 policy-apex-pdp | fetch.max.wait.ms = 500 11:12:43 policy-apex-pdp | fetch.min.bytes = 1 11:12:43 policy-apex-pdp | group.id = 51f148c3-bcf8-4571-938a-66df08a6d568 11:12:43 policy-apex-pdp | group.instance.id = null 11:12:43 policy-apex-pdp | heartbeat.interval.ms = 3000 11:12:43 policy-apex-pdp | interceptor.classes = [] 11:12:43 policy-apex-pdp | internal.leave.group.on.close = true 11:12:43 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 11:12:43 policy-apex-pdp | isolation.level = read_uncommitted 11:12:43 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:12:43 policy-apex-pdp | max.partition.fetch.bytes = 1048576 11:12:43 policy-apex-pdp | max.poll.interval.ms = 300000 11:12:43 policy-apex-pdp | max.poll.records = 500 11:12:43 policy-apex-pdp | metadata.max.age.ms = 300000 11:12:43 policy-apex-pdp | metric.reporters = [] 11:12:43 policy-apex-pdp | metrics.num.samples = 2 11:12:43 policy-apex-pdp | metrics.recording.level = INFO 11:12:43 policy-apex-pdp | metrics.sample.window.ms = 30000 11:12:43 policy-apex-pdp | 
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:12:43 policy-apex-pdp | receive.buffer.bytes = 65536 11:12:43 policy-apex-pdp | reconnect.backoff.max.ms = 1000 11:12:43 policy-apex-pdp | reconnect.backoff.ms = 50 11:12:43 policy-apex-pdp | request.timeout.ms = 30000 11:12:43 policy-apex-pdp | retry.backoff.ms = 100 11:12:43 policy-apex-pdp | sasl.client.callback.handler.class = null 11:12:43 policy-apex-pdp | sasl.jaas.config = null 11:12:43 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:12:43 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 11:12:43 policy-apex-pdp | sasl.kerberos.service.name = null 11:12:43 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 11:12:43 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 11:12:43 policy-apex-pdp | sasl.login.callback.handler.class = null 11:12:43 policy-apex-pdp | sasl.login.class = null 11:12:43 policy-apex-pdp | sasl.login.connect.timeout.ms = null 11:12:43 policy-apex-pdp | sasl.login.read.timeout.ms = null 11:12:43 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 11:12:43 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 11:12:43 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 11:12:43 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 11:12:43 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 11:12:43 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 11:12:43 policy-apex-pdp | sasl.mechanism = GSSAPI 11:12:43 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 11:12:43 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 11:12:43 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 11:12:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:12:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:12:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:12:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 11:12:43 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 11:12:43 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 11:12:43 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 11:12:43 policy-apex-pdp | security.protocol = PLAINTEXT 11:12:43 policy-apex-pdp | security.providers = null 11:12:43 policy-apex-pdp | send.buffer.bytes = 131072 11:12:43 policy-apex-pdp | session.timeout.ms = 45000 11:12:43 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 11:12:43 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 11:12:43 policy-apex-pdp | ssl.cipher.suites = null 11:12:43 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:12:43 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 11:12:43 policy-apex-pdp | ssl.engine.factory.class = null 11:12:43 policy-apex-pdp | ssl.key.password = null 11:12:43 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 11:12:43 policy-apex-pdp | ssl.keystore.certificate.chain = null 11:12:43 policy-apex-pdp | ssl.keystore.key = null 11:12:43 policy-apex-pdp | ssl.keystore.location = null 11:12:43 policy-apex-pdp | ssl.keystore.password = null 11:12:43 policy-apex-pdp | ssl.keystore.type = JKS 11:12:43 policy-apex-pdp | ssl.protocol = TLSv1.3 11:12:43 policy-apex-pdp | ssl.provider = null 11:12:43 policy-apex-pdp | ssl.secure.random.implementation = null 11:12:43 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 11:12:43 
policy-apex-pdp | ssl.truststore.certificates = null 11:12:43 policy-apex-pdp | ssl.truststore.location = null 11:12:43 policy-apex-pdp | ssl.truststore.password = null 11:12:43 policy-apex-pdp | ssl.truststore.type = JKS 11:12:43 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:12:43 policy-apex-pdp | 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.687+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.687+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.687+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714043446687 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.688+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2, groupId=51f148c3-bcf8-4571-938a-66df08a6d568] Subscribed to topic(s): policy-pdp-pap 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.688+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=fb1a8d96-d50b-4ee5-a184-aa1186f7c213, alive=false, publisher=null]]: starting 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.700+00:00|INFO|ProducerConfig|main] ProducerConfig values: 11:12:43 policy-apex-pdp | acks = -1 11:12:43 policy-apex-pdp | auto.include.jmx.reporter = true 11:12:43 policy-apex-pdp | batch.size = 16384 11:12:43 policy-apex-pdp | bootstrap.servers = [kafka:9092] 11:12:43 policy-apex-pdp | buffer.memory = 33554432 11:12:43 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 11:12:43 policy-apex-pdp | client.id = producer-1 11:12:43 policy-apex-pdp | compression.type = none 11:12:43 policy-apex-pdp | connections.max.idle.ms = 540000 11:12:43 policy-apex-pdp | delivery.timeout.ms = 120000 11:12:43 policy-apex-pdp | enable.idempotence = true 11:12:43 policy-apex-pdp | interceptor.classes = [] 11:12:43 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:12:43 policy-apex-pdp | linger.ms = 0 11:12:43 policy-apex-pdp | max.block.ms = 60000 11:12:43 policy-apex-pdp | max.in.flight.requests.per.connection = 5 11:12:43 policy-apex-pdp | max.request.size = 1048576 11:12:43 policy-apex-pdp | metadata.max.age.ms = 300000 11:12:43 policy-apex-pdp | metadata.max.idle.ms = 300000 11:12:43 policy-apex-pdp | metric.reporters = [] 11:12:43 policy-apex-pdp | metrics.num.samples = 2 11:12:43 policy-apex-pdp | metrics.recording.level = INFO 11:12:43 policy-apex-pdp | metrics.sample.window.ms = 30000 11:12:43 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 11:12:43 policy-apex-pdp | partitioner.availability.timeout.ms = 0 11:12:43 policy-apex-pdp | partitioner.class = null 11:12:43 policy-apex-pdp | partitioner.ignore.keys = false 11:12:43 policy-apex-pdp | receive.buffer.bytes = 32768 11:12:43 policy-apex-pdp | reconnect.backoff.max.ms = 1000 11:12:43 policy-apex-pdp | reconnect.backoff.ms = 50 11:12:43 policy-apex-pdp | request.timeout.ms = 30000 11:12:43 policy-apex-pdp | retries = 2147483647 11:12:43 policy-apex-pdp | retry.backoff.ms = 100 11:12:43 policy-apex-pdp | sasl.client.callback.handler.class = null 11:12:43 policy-apex-pdp | sasl.jaas.config = null 11:12:43 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:12:43 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 11:12:43 policy-apex-pdp | sasl.kerberos.service.name = null 11:12:43 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 
0.05 11:12:43 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 11:12:43 policy-apex-pdp | sasl.login.callback.handler.class = null 11:12:43 policy-apex-pdp | sasl.login.class = null 11:12:43 policy-apex-pdp | sasl.login.connect.timeout.ms = null 11:12:43 policy-apex-pdp | sasl.login.read.timeout.ms = null 11:12:43 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 11:12:43 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 11:12:43 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 11:12:43 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 11:12:43 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 11:12:43 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 11:12:43 policy-apex-pdp | sasl.mechanism = GSSAPI 11:12:43 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 11:12:43 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 11:12:43 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 11:12:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:12:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:12:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:12:43 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 11:12:43 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 11:12:43 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 11:12:43 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.372934786Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-25T11:10:00Z 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.373275709Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.37332025Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.37335019Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.37338069Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.37340551Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.373438341Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.373470481Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.373511311Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.373564742Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.373607292Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.373638233Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 11:12:43 grafana | logger=settings 
t=2024-04-25T11:10:00.373670263Z level=info msg=Target target=[all] 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.373723334Z level=info msg="Path Home" path=/usr/share/grafana 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.373764754Z level=info msg="Path Data" path=/var/lib/grafana 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.373799404Z level=info msg="Path Logs" path=/var/log/grafana 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.373825285Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.373869765Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 11:12:43 grafana | logger=settings t=2024-04-25T11:10:00.373920935Z level=info msg="App mode production" 11:12:43 grafana | logger=sqlstore t=2024-04-25T11:10:00.374275879Z level=info msg="Connecting to DB" dbtype=sqlite3 11:12:43 grafana | logger=sqlstore t=2024-04-25T11:10:00.374330789Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.375050897Z level=info msg="Starting DB migrations" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.375969896Z level=info msg="Executing migration" id="create migration_log table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.376862885Z level=info msg="Migration successfully executed" id="create migration_log table" duration=892.679µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.380481931Z level=info msg="Executing migration" id="create user table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.381123867Z level=info msg="Migration successfully executed" id="create user table" duration=641.466µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.387928076Z level=info msg="Executing migration" id="add unique index user.login" 11:12:43 kafka | zookeeper.ssl.enabled.protocols = null 11:12:43 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 11:12:43 kafka | zookeeper.ssl.keystore.location = null 11:12:43 kafka | zookeeper.ssl.keystore.password = null 11:12:43 kafka | zookeeper.ssl.keystore.type = null 11:12:43 kafka | zookeeper.ssl.ocsp.enable = false 11:12:43 kafka | zookeeper.ssl.protocol = TLSv1.2 11:12:43 kafka | zookeeper.ssl.truststore.location = null 11:12:43 kafka | zookeeper.ssl.truststore.password = null 11:12:43 kafka | zookeeper.ssl.truststore.type = null 11:12:43 kafka | (kafka.server.KafkaConfig) 11:12:43 kafka | [2024-04-25 11:10:04,856] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:12:43 kafka | [2024-04-25 11:10:04,857] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:12:43 kafka | [2024-04-25 11:10:04,858] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:12:43 kafka | [2024-04-25 11:10:04,861] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:12:43 kafka | [2024-04-25 11:10:04,896] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 11:12:43 kafka | [2024-04-25 11:10:04,904] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 11:12:43 kafka | [2024-04-25 11:10:04,913] INFO Loaded 0 logs in 17ms (kafka.log.LogManager) 11:12:43 kafka | [2024-04-25 11:10:04,915] INFO Starting log cleanup with a period of 300000 ms. 
(kafka.log.LogManager) 11:12:43 kafka | [2024-04-25 11:10:04,916] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 11:12:43 kafka | [2024-04-25 11:10:04,927] INFO Starting the log cleaner (kafka.log.LogCleaner) 11:12:43 kafka | [2024-04-25 11:10:05,006] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 11:12:43 kafka | [2024-04-25 11:10:05,047] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 11:12:43 kafka | [2024-04-25 11:10:05,073] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 11:12:43 kafka | [2024-04-25 11:10:05,131] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 11:12:43 kafka | [2024-04-25 11:10:05,567] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 11:12:43 kafka | [2024-04-25 11:10:05,589] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 11:12:43 kafka | [2024-04-25 11:10:05,589] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 11:12:43 kafka | [2024-04-25 11:10:05,595] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 11:12:43 kafka | [2024-04-25 11:10:05,600] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 11:12:43 kafka | [2024-04-25 11:10:05,637] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:12:43 kafka | [2024-04-25 11:10:05,641] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:12:43 kafka | [2024-04-25 11:10:05,642] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:12:43 kafka | [2024-04-25 11:10:05,644] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:12:43 kafka | [2024-04-25 11:10:05,647] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:12:43 kafka | [2024-04-25 11:10:05,681] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 11:12:43 policy-db-migrator | 
-------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) 
NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 11:12:43 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 11:12:43 simulator | overriding logback.xml 11:12:43 simulator | 2024-04-25 11:10:01,951 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 11:12:43 simulator | 2024-04-25 11:10:02,037 INFO org.onap.policy.models.simulators starting 11:12:43 simulator | 2024-04-25 11:10:02,037 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 11:12:43 simulator | 2024-04-25 11:10:02,246 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 11:12:43 simulator | 2024-04-25 11:10:02,247 INFO org.onap.policy.models.simulators starting A&AI simulator 11:12:43 simulator | 2024-04-25 11:10:02,364 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 11:12:43 simulator | 2024-04-25 11:10:02,376 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:12:43 simulator | 2024-04-25 11:10:02,380 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, 
jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:12:43 simulator | 2024-04-25 11:10:02,390 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 11:12:43 simulator | 2024-04-25 11:10:02,464 INFO Session workerName=node0 11:12:43 simulator | 2024-04-25 11:10:03,109 INFO Using GSON for REST calls 11:12:43 simulator | 2024-04-25 11:10:03,247 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} 11:12:43 simulator | 2024-04-25 11:10:03,258 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 11:12:43 simulator | 2024-04-25 11:10:03,268 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1884ms 11:12:43 simulator | 2024-04-25 11:10:03,269 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4110 ms. 
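[editor's note] For context on the JettyJerseyServer entries above: each simulator is an embedded Jetty server hosting a Jersey ServletContainer, and the WAITED-START -> STARTING -> STARTED states trace the call to Server.start(). Below is a minimal sketch of the same wiring, assuming Jetty 11 and Jersey 3 on the classpath; the port (6666) is taken from the A&AI simulator log, while the class name and resource package are hypothetical, not ONAP's JettyServletServer implementation.

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.glassfish.jersey.servlet.ServletContainer;

public final class EmbeddedSimulatorSketch {
    public static void main(String[] args) throws Exception {
        // Bind the same host/port the A&AI simulator reports (0.0.0.0:6666).
        Server server = new Server(6666);

        // contextPath=/ and a single Jersey servlet on /*, as in the log's toString().
        ServletContextHandler context = new ServletContextHandler(ServletContextHandler.NO_SESSIONS);
        context.setContextPath("/");
        ServletHolder jersey = new ServletHolder(ServletContainer.class);
        // Hypothetical resource package; the real simulator registers its own JAX-RS classes.
        jersey.setInitParameter("jersey.config.server.provider.packages", "org.onap.policy.simulators");
        context.addServlet(jersey, "/*");

        server.setHandler(context);
        server.start();  // corresponds to "Started Server@...{STARTING}" and the STARTED servlet state
        server.join();
    }
}

The "pending time" figures in the log are simply how much of the server's start-wait budget remained when startup completed.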
11:12:43 simulator | 2024-04-25 11:10:03,278 INFO org.onap.policy.models.simulators starting SDNC simulator 11:12:43 simulator | 2024-04-25 11:10:03,281 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 11:12:43 simulator | 2024-04-25 11:10:03,281 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:12:43 simulator | 2024-04-25 11:10:03,282 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:12:43 simulator | 2024-04-25 11:10:03,282 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 11:12:43 simulator | 2024-04-25 11:10:03,315 INFO Session workerName=node0 11:12:43 simulator | 2024-04-25 11:10:03,415 INFO Using GSON for REST calls 11:12:43 simulator | 2024-04-25 11:10:03,426 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} 11:12:43 simulator | 2024-04-25 11:10:03,438 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 11:12:43 simulator | 2024-04-25 11:10:03,439 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @2055ms 11:12:43 simulator | 2024-04-25 11:10:03,439 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, 
swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4842 ms. 11:12:43 simulator | 2024-04-25 11:10:03,441 INFO org.onap.policy.models.simulators starting SO simulator 11:12:43 simulator | 2024-04-25 11:10:03,446 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 11:12:43 simulator | 2024-04-25 11:10:03,447 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:12:43 simulator | 2024-04-25 11:10:03,448 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:12:43 simulator | 2024-04-25 11:10:03,450 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 11:12:43 simulator | 2024-04-25 11:10:03,453 INFO Session workerName=node0 11:12:43 simulator | 2024-04-25 11:10:03,535 INFO Using GSON for REST calls 11:12:43 simulator 
| 2024-04-25 11:10:03,563 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE} 11:12:43 simulator | 2024-04-25 11:10:03,572 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 11:12:43 simulator | 2024-04-25 11:10:03,573 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @2189ms 11:12:43 simulator | 2024-04-25 11:10:03,573 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4875 ms. 11:12:43 simulator | 2024-04-25 11:10:03,574 INFO org.onap.policy.models.simulators starting VFC simulator 11:12:43 simulator | 2024-04-25 11:10:03,577 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 11:12:43 simulator | 2024-04-25 11:10:03,577 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:12:43 simulator | 2024-04-25 11:10:03,578 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, 
jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:12:43 simulator | 2024-04-25 11:10:03,578 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 11:12:43 simulator | 2024-04-25 11:10:03,581 INFO Session workerName=node0 11:12:43 simulator | 2024-04-25 11:10:03,631 INFO Using GSON for REST calls 11:12:43 simulator | 2024-04-25 11:10:03,640 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE} 11:12:43 simulator | 2024-04-25 11:10:03,643 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 11:12:43 simulator | 2024-04-25 11:10:03,643 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @2259ms 11:12:43 simulator | 2024-04-25 11:10:03,643 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4935 ms. 11:12:43 simulator | 2024-04-25 11:10:03,644 INFO org.onap.policy.models.simulators started 11:12:43 mariadb | 11:12:43 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 11:12:43 mariadb | 11:12:43 mariadb | Please report any problems at https://mariadb.org/jira 11:12:43 mariadb | 11:12:43 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 11:12:43 mariadb | 11:12:43 mariadb | Consider joining MariaDB's strong and vibrant community: 11:12:43 mariadb | https://mariadb.org/get-involved/ 11:12:43 mariadb | 11:12:43 mariadb | 2024-04-25 11:10:04+00:00 [Note] [Entrypoint]: Database files initialized 11:12:43 mariadb | 2024-04-25 11:10:04+00:00 [Note] [Entrypoint]: Starting temporary server 11:12:43 mariadb | 2024-04-25 11:10:04+00:00 [Note] [Entrypoint]: Waiting for server startup 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 99 ... 
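[editor's note] The mariadb entrypoint above starts a temporary server and then reports "Waiting for server startup" before it runs the init scripts. A sketch of such a readiness poll over JDBC follows, assuming the MariaDB Connector/J driver is on the classpath; note the entrypoint itself polls over the unix socket (the temporary server listens on port 0), so the TCP endpoint, credentials, and timings here are placeholders for a local run, not values from this job.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public final class WaitForMariaDb {
    public static void main(String[] args) throws InterruptedException {
        // Placeholder endpoint/credentials; the real entrypoint checks the unix socket instead.
        String url = "jdbc:mariadb://localhost:3306/";
        for (int attempt = 1; attempt <= 30; attempt++) {
            try (Connection c = DriverManager.getConnection(url, "root", "secret")) {
                if (c.isValid(2)) {
                    System.out.println("mariadbd: ready for connections");
                    return;
                }
            } catch (SQLException e) {
                System.out.println("not ready yet (attempt " + attempt + "): " + e.getMessage());
            }
            Thread.sleep(1000);
        }
        throw new IllegalStateException("server did not become ready in time");
    }
}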
11:12:43 mariadb | 2024-04-25 11:10:04 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Note] InnoDB: Number of transaction pools: 1 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Note] InnoDB: Completed initialization of buffer pool 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Note] InnoDB: 128 rollback segments are active. 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Note] InnoDB: log sequence number 46574; transaction id 14 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Note] Plugin 'FEEDBACK' is disabled. 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 11:12:43 mariadb | 2024-04-25 11:10:04 0 [Note] mariadbd: ready for connections. 11:12:43 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 11:12:43 mariadb | 2024-04-25 11:10:05+00:00 [Note] [Entrypoint]: Temporary server started. 11:12:43 mariadb | 2024-04-25 11:10:07+00:00 [Note] [Entrypoint]: Creating user policy_user 11:12:43 mariadb | 2024-04-25 11:10:07+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 11:12:43 mariadb | 11:12:43 mariadb | 2024-04-25 11:10:07+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 11:12:43 mariadb | 11:12:43 mariadb | 2024-04-25 11:10:07+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 11:12:43 mariadb | #!/bin/bash -xv 11:12:43 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 11:12:43 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 11:12:43 mariadb | # 11:12:43 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 11:12:43 mariadb | # you may not use this file except in compliance with the License. 
11:12:43 mariadb | # You may obtain a copy of the License at 11:12:43 mariadb | # 11:12:43 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 11:12:43 mariadb | # 11:12:43 mariadb | # Unless required by applicable law or agreed to in writing, software 11:12:43 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 11:12:43 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 11:12:43 mariadb | # See the License for the specific language governing permissions and 11:12:43 mariadb | # limitations under the License. 11:12:43 mariadb | 11:12:43 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 11:12:43 mariadb | do 11:12:43 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 11:12:43 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 11:12:43 mariadb | done 11:12:43 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 11:12:43 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 11:12:43 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 11:12:43 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 11:12:43 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 11:12:43 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 11:12:43 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 11:12:43 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 11:12:43 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 11:12:43 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 11:12:43 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 11:12:43 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 11:12:43 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 11:12:43 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 11:12:43 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 11:12:43 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 11:12:43 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 11:12:43 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 11:12:43 mariadb | 11:12:43 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 11:12:43 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 11:12:43 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 11:12:43 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 11:12:43 mariadb | 11:12:43 mariadb | 2024-04-25 11:10:08+00:00 [Note] [Entrypoint]: Stopping temporary server 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 11:12:43 mariadb | 2024-04-25 
11:10:08 0 [Note] InnoDB: FTS optimize thread exiting. 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Note] InnoDB: Starting shutdown... 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Note] InnoDB: Buffer pool(s) dump completed at 240425 11:10:08 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Note] InnoDB: Shutdown completed; log sequence number 328179; transaction id 298 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Note] mariadbd: Shutdown complete 11:12:43 mariadb | 11:12:43 mariadb | 2024-04-25 11:10:08+00:00 [Note] [Entrypoint]: Temporary server stopped 11:12:43 mariadb | 11:12:43 mariadb | 2024-04-25 11:10:08+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 11:12:43 mariadb | 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Note] InnoDB: Number of transaction pools: 1 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 11:12:43 mariadb | 2024-04-25 11:10:08 0 [Note] InnoDB: Completed initialization of buffer pool 11:12:43 mariadb | 2024-04-25 11:10:09 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 11:12:43 mariadb | 2024-04-25 11:10:09 0 [Note] InnoDB: 128 rollback segments are active. 11:12:43 mariadb | 2024-04-25 11:10:09 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 11:12:43 mariadb | 2024-04-25 11:10:09 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 11:12:43 mariadb | 2024-04-25 11:10:09 0 [Note] InnoDB: log sequence number 328179; transaction id 299 11:12:43 mariadb | 2024-04-25 11:10:09 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 11:12:43 mariadb | 2024-04-25 11:10:09 0 [Note] Plugin 'FEEDBACK' is disabled. 11:12:43 mariadb | 2024-04-25 11:10:09 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 11:12:43 mariadb | 2024-04-25 11:10:09 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 11:12:43 mariadb | 2024-04-25 11:10:09 0 [Note] Server socket created on IP: '0.0.0.0'. 11:12:43 mariadb | 2024-04-25 11:10:09 0 [Note] Server socket created on IP: '::'. 11:12:43 mariadb | 2024-04-25 11:10:09 0 [Note] mariadbd: ready for connections. 
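[editor's note] The db.sh -xv trace above loops over six databases, creating each one and granting policy_user full privileges, then flushes privileges before the temporary server is stopped and the permanent server starts on port 3306. The same provisioning expressed over JDBC, as a sketch: the database list, the policy_user account, and the root password ("secret") come straight from the trace, while the URL/host is an assumption for a local run.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public final class ProvisionPolicyDbs {
    private static final String[] DBS = {
        "migration", "pooling", "policyadmin", "operationshistory", "clampacm", "policyclamp"
    };

    public static void main(String[] args) throws Exception {
        // Assumed endpoint; the entrypoint has already created policy_user before these grants run.
        try (Connection c = DriverManager.getConnection("jdbc:mariadb://localhost:3306/", "root", "secret");
             Statement s = c.createStatement()) {
            for (String db : DBS) {
                s.executeUpdate("CREATE DATABASE IF NOT EXISTS `" + db + "`");
                s.executeUpdate("GRANT ALL PRIVILEGES ON `" + db + "`.* TO 'policy_user'@'%'");
            }
            s.executeUpdate("FLUSH PRIVILEGES");
        }
    }
}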
11:12:43 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 11:12:43 mariadb | 2024-04-25 11:10:09 0 [Note] InnoDB: Buffer pool(s) load completed at 240425 11:10:09 11:12:43 mariadb | 2024-04-25 11:10:09 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 11:12:43 mariadb | 2024-04-25 11:10:09 7 [Warning] Aborted connection 7 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 11:12:43 mariadb | 2024-04-25 11:10:09 18 [Warning] Aborted connection 18 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 11:12:43 mariadb | 2024-04-25 11:10:10 35 [Warning] Aborted connection 35 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name 
VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0450-pdpgroup.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0470-pdp.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, 
POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-apex-pdp | security.protocol = PLAINTEXT 11:12:43 policy-apex-pdp | security.providers = null 11:12:43 policy-apex-pdp | send.buffer.bytes = 131072 11:12:43 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 11:12:43 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 11:12:43 policy-apex-pdp | ssl.cipher.suites = null 11:12:43 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:12:43 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 11:12:43 policy-apex-pdp | ssl.engine.factory.class = null 11:12:43 policy-apex-pdp | ssl.key.password = null 11:12:43 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 11:12:43 policy-apex-pdp | ssl.keystore.certificate.chain = null 11:12:43 policy-apex-pdp | ssl.keystore.key = null 11:12:43 policy-apex-pdp | ssl.keystore.location = null 11:12:43 policy-apex-pdp | ssl.keystore.password = null 11:12:43 policy-apex-pdp | ssl.keystore.type = JKS 11:12:43 policy-apex-pdp | ssl.protocol = TLSv1.3 11:12:43 policy-apex-pdp | ssl.provider = null 11:12:43 policy-apex-pdp | ssl.secure.random.implementation = null 11:12:43 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 11:12:43 policy-apex-pdp | ssl.truststore.certificates = null 11:12:43 policy-apex-pdp | ssl.truststore.location = null 11:12:43 policy-apex-pdp | ssl.truststore.password = null 11:12:43 policy-apex-pdp | ssl.truststore.type = JKS 11:12:43 policy-apex-pdp | transaction.timeout.ms = 60000 11:12:43 policy-apex-pdp | transactional.id = null 11:12:43 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:12:43 policy-apex-pdp | 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.710+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
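[editor's note] The ProducerConfig dump earlier in this log (acks = -1, enable.idempotence = true, retries = 2147483647, StringSerializer for key and value, bootstrap.servers = [kafka:9092]) is exactly what makes the client report "Instantiated an idempotent producer" on the next line. A minimal sketch reproducing that configuration against the same bootstrap server and the policy-pdp-pap topic; only the illustrative key/value payload is invented.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public final class PdpPapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // from the config dump
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);          // triggers the "idempotent producer" log line
        props.put(ProducerConfig.ACKS_CONFIG, "all");                       // logged as acks = -1
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);        // logged as retries = 2147483647
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Illustrative payload only; the PDP publishes PdpStatus messages on this topic.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "status", "{\"messageName\":\"PDP_STATUS\"}"));
            producer.flush();
        }
    }
}

The matching consumer seen at the top of this section subscribes to the same policy-pdp-pap topic under group 51f148c3-bcf8-4571-938a-66df08a6d568, which is how PAP and the apex PDP exchange status and update messages.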
11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.725+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.725+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.725+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714043446725 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.726+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=fb1a8d96-d50b-4ee5-a184-aa1186f7c213, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.726+00:00|INFO|ServiceManager|main] service manager starting set alive 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.726+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.728+00:00|INFO|ServiceManager|main] service manager starting topic sinks 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.728+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.730+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.730+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.730+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.730+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=51f148c3-bcf8-4571-938a-66df08a6d568, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@607fbe09 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.730+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=51f148c3-bcf8-4571-938a-66df08a6d568, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.730+00:00|INFO|ServiceManager|main] service manager starting Create REST server 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.389255739Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.328033ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.392691435Z level=info msg="Executing migration" id="add unique index 
user.email" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.393927807Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.236542ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.397006458Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.397704605Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=698.397µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.400629454Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.401332082Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=701.068µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.406059879Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.408366782Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.309113ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.41115025Z level=info msg="Executing migration" id="create user table v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.412540285Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.389645ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.415753847Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.416538985Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=784.688µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.422486155Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.423669977Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.188902ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.42698717Z level=info msg="Executing migration" id="copy data_source v1 to v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.427656417Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=669.477µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.43097564Z level=info msg="Executing migration" id="Drop old table user_v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.431511266Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=535.226µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.438508646Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.440356116Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.85478ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.443476867Z level=info msg="Executing migration" id="Update user table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.443621298Z level=info msg="Migration successfully executed" id="Update user table charset" duration=144.051µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.446921641Z level=info msg="Executing migration" id="Add last_seen_at column to user" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.448017832Z level=info msg="Migration successfully executed" id="Add 
last_seen_at column to user" duration=1.095701ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.4507895Z level=info msg="Executing migration" id="Add missing user data" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.451653399Z level=info msg="Migration successfully executed" id="Add missing user data" duration=863.609µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.457473728Z level=info msg="Executing migration" id="Add is_disabled column to user" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.458639849Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.165721ms 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, 
conceptContainerName, conceptContainerVersion)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:12:43 policy-db-migrator | -------------- 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.460582079Z level=info msg="Executing migration" id="Add index user.login/user.email" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.461372237Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=790.078µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.464182975Z level=info msg="Executing migration" id="Add is_service_account column to user" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.465444148Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.254823ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.468264087Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.476611641Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.346794ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.525567644Z level=info msg="Executing migration" id="Add uid column to user" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.526522764Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=952.599µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.529522404Z level=info msg="Executing migration" id="Update uid column values for users" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.529713946Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=191.322µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.532519024Z level=info msg="Executing migration" id="Add unique index user_uid" 11:12:43 grafana | logger=migrator 
t=2024-04-25T11:10:00.533272182Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=753.128µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.535921309Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.536167571Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=245.992µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.541561896Z level=info msg="Executing migration" id="create temp user table v1-7" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.542158381Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=596.155µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.544799988Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.545373154Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=572.926µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.548165862Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.548780908Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=616.026µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.55393152Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.554698778Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=767.158µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.557499596Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.558264994Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=765.338µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.560982891Z level=info msg="Executing migration" id="Update temp_user table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.561054012Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=71.721µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.566163843Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.566878011Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=714.148µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.56973736Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.570453977Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=716.707µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.573980082Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.57469762Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=717.608µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.579880391Z 
level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.580620469Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=740.178µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.583117755Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.586241256Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.122891ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.590118085Z level=info msg="Executing migration" id="create temp_user v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.590987183Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=864.948µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.595955533Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.596790612Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=835.089µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.599551Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.600354968Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=803.738µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.603087786Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.603835183Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=749.287µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.60945202Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.61051588Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.06252ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.614086957Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.614774534Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=687.197µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.618027896Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.618861144Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=832.388µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.621753923Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0570-toscadatatype.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 
policy-db-migrator | > upgrade 0580-toscadatatypes.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0630-toscanodetype.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 
0640-toscanodetypes.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0660-toscaparameter.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.622176098Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=422.075µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.628122538Z level=info msg="Executing migration" id="create star table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.628831685Z level=info msg="Migration successfully executed" id="create star table" duration=708.697µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.631436341Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.632203259Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=766.758µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.63531027Z level=info msg="Executing migration" id="create org table v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.636107148Z level=info msg="Migration successfully executed" id="create org table v1" duration=790.268µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.641662224Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.642462103Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=799.839µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.645395482Z level=info msg="Executing migration" id="create org_user table v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.64622592Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=829.108µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.649387272Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.650275582Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=888.08µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.653908268Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.655458603Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" 
duration=1.553275ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.666878358Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.668243703Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.365085ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.671660117Z level=info msg="Executing migration" id="Update org table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.671816508Z level=info msg="Migration successfully executed" id="Update org table charset" duration=154.111µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.675138362Z level=info msg="Executing migration" id="Update org_user table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.675202672Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=64.88µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.67797081Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.678213422Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=242.272µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.684002652Z level=info msg="Executing migration" id="create dashboard table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.685485476Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.485084ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.688595287Z level=info msg="Executing migration" id="add index dashboard.account_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.690092823Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.490146ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.693352976Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.694280985Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=927.159µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.697156644Z level=info msg="Executing migration" id="create dashboard_tag table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.697898242Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=742.898µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.703437948Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.704285526Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=847.568µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.706924963Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.70772102Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=795.847µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.710492008Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.716122425Z level=info msg="Migration successfully executed" id="Rename table dashboard to 
dashboard_v1 - v1" duration=5.630067ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.721394428Z level=info msg="Executing migration" id="create dashboard v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.722221126Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=827.398µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.724953974Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.725792712Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=838.868µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.728907494Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.729820653Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=906.209µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.733848363Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.734252808Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=403.885µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.736849044Z level=info msg="Executing migration" id="drop table dashboard_v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.737740223Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=891.2µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.741677613Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.741815624Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=137.871µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.744262589Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.746460401Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.193373ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.749740084Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.751545913Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.805859ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.754680184Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.756596193Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.909399ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.761178479Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.762018117Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=844.078µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.76513662Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.766991138Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.854178ms 11:12:43 grafana 
| logger=migrator t=2024-04-25T11:10:00.770082629Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.770936038Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=853.379µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.775083809Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.775898017Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=813.138µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.779251112Z level=info msg="Executing migration" id="Update dashboard table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.779339493Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=89.081µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.782707677Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.782831898Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=123.861µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.786964249Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.789123121Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.159372ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.79204796Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.793503105Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.454765ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.796198063Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.797646907Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.448314ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.801562847Z level=info msg="Executing migration" id="Add column uid in dashboard" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.805014972Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=3.451434ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.808145383Z level=info msg="Executing migration" id="Update uid column values in dashboard" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.808471476Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=326.413µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.811289824Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.812098602Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=808.508µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.815247264Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.816004692Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=758.638µs 11:12:43 grafana | logger=migrator 
t=2024-04-25T11:10:00.819659599Z level=info msg="Executing migration" id="Update dashboard title length" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.819729749Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=70.94µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.82280582Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.824392927Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.586927ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.827565999Z level=info msg="Executing migration" id="create dashboard_provisioning" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.828414587Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=849.198µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.832396438Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.83764897Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.251902ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.840596989Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.841373138Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=772.769µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.844140356Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.845009314Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=868.928µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.850667941Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.851487419Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=819.278µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.854672711Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.855045016Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=371.545µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.857976665Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.858531531Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=559.756µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.862903415Z level=info msg="Executing migration" id="Add check_sum column" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.865016266Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.112592ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.867880445Z level=info msg="Executing migration" id="Add index for dashboard_title" 11:12:43 grafana | logger=migrator 
t=2024-04-25T11:10:00.868789035Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=907.96µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.871762365Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.746+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 11:12:43 policy-apex-pdp | [] 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.748+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 11:12:43 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f96497fc-722b-4a01-b640-a01c709cdca3","timestampMs":1714043446732,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup"} 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.994+00:00|INFO|ServiceManager|main] service manager starting Rest Server 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.994+00:00|INFO|ServiceManager|main] service manager starting 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.994+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 11:12:43 policy-apex-pdp | [2024-04-25T11:10:46.994+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:12:43 policy-apex-pdp | [2024-04-25T11:10:47.006+00:00|INFO|ServiceManager|main] service manager started 11:12:43 policy-apex-pdp | [2024-04-25T11:10:47.006+00:00|INFO|ServiceManager|main] service manager started 11:12:43 policy-apex-pdp | [2024-04-25T11:10:47.006+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 11:12:43 zookeeper | ===> User 11:12:43 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 11:12:43 zookeeper | ===> Configuring ... 11:12:43 zookeeper | ===> Running preflight checks ... 11:12:43 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 11:12:43 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 11:12:43 zookeeper | ===> Launching ... 11:12:43 zookeeper | ===> Launching zookeeper ... 
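
Editor's note: the PDP_STATUS heartbeat shown a few entries above is a plain JSON payload published to the policy-pdp-pap topic. A minimal sketch of producing an equivalent message, assuming the kafka-clients and Gson libraries already visible in these logs; the PdpStatus class and producer settings here are illustrative stand-ins, not the actual ONAP message classes:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import com.google.gson.Gson;

    public class HeartbeatSketch {
        // Field names mirror the PDP_STATUS payload logged above; the class
        // itself is hypothetical, for illustration only.
        static class PdpStatus {
            String pdpType = "apex";
            String state = "PASSIVE";
            String healthy = "HEALTHY";
            String description = "Pdp Heartbeat";
            String messageName = "PDP_STATUS";
            String requestId = java.util.UUID.randomUUID().toString();
            long timestampMs = System.currentTimeMillis();
            String name = "apex-80113579-b8ad-4e5d-ac62-869520e19ac0"; // instance name from the log
            String pdpGroup = "defaultGroup";
        }

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092"); // broker address from the log
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String json = new Gson().toJson(new PdpStatus());
                producer.send(new ProducerRecord<>("policy-pdp-pap", json));
            } // close() flushes any in-flight sends
        }
    }

PAP consumes these heartbeats to track PDP liveness. Note that later in this log the same payload comes back on the PDP's own consumer and is discarded by the MessageTypeDispatcher: a PDP ignores its own status messages on the shared topic.
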
11:12:43 zookeeper | [2024-04-25 11:10:01,049] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:12:43 zookeeper | [2024-04-25 11:10:01,058] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:12:43 zookeeper | [2024-04-25 11:10:01,058] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:12:43 zookeeper | [2024-04-25 11:10:01,058] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:12:43 zookeeper | [2024-04-25 11:10:01,058] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:12:43 zookeeper | [2024-04-25 11:10:01,060] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 11:12:43 zookeeper | [2024-04-25 11:10:01,060] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 11:12:43 zookeeper | [2024-04-25 11:10:01,060] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 11:12:43 zookeeper | [2024-04-25 11:10:01,060] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 11:12:43 zookeeper | [2024-04-25 11:10:01,061] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) 11:12:43 zookeeper | [2024-04-25 11:10:01,061] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:12:43 zookeeper | [2024-04-25 11:10:01,062] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:12:43 zookeeper | [2024-04-25 11:10:01,062] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:12:43 zookeeper | [2024-04-25 11:10:01,062] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:12:43 zookeeper | [2024-04-25 11:10:01,062] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:12:43 zookeeper | [2024-04-25 11:10:01,062] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 11:12:43 zookeeper | [2024-04-25 11:10:01,074] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3246fb96 (org.apache.zookeeper.server.ServerMetrics) 11:12:43 zookeeper | [2024-04-25 11:10:01,077] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 11:12:43 zookeeper | [2024-04-25 11:10:01,077] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 11:12:43 zookeeper | [2024-04-25 11:10:01,079] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 11:12:43 zookeeper | [2024-04-25 11:10:01,089] INFO (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 policy-apex-pdp | [2024-04-25T11:10:47.006+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, 
/*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:12:43 policy-apex-pdp | [2024-04-25T11:10:47.181+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2, groupId=51f148c3-bcf8-4571-938a-66df08a6d568] Cluster ID: hj8fcuYTRGyyshpZV-zZWg 11:12:43 policy-apex-pdp | [2024-04-25T11:10:47.181+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: hj8fcuYTRGyyshpZV-zZWg 11:12:43 policy-apex-pdp | [2024-04-25T11:10:47.182+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 11:12:43 policy-apex-pdp | [2024-04-25T11:10:47.182+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2, groupId=51f148c3-bcf8-4571-938a-66df08a6d568] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 11:12:43 policy-apex-pdp | [2024-04-25T11:10:47.188+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2, groupId=51f148c3-bcf8-4571-938a-66df08a6d568] (Re-)joining group 11:12:43 policy-apex-pdp | [2024-04-25T11:10:47.206+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2, groupId=51f148c3-bcf8-4571-938a-66df08a6d568] Request joining group due to: need to re-join with the given member-id: consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2-3fcc063a-473e-44ee-8cdf-b3d14e063106 11:12:43 policy-apex-pdp | [2024-04-25T11:10:47.206+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2, groupId=51f148c3-bcf8-4571-938a-66df08a6d568] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 11:12:43 policy-apex-pdp | [2024-04-25T11:10:47.206+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2, groupId=51f148c3-bcf8-4571-938a-66df08a6d568] (Re-)joining group 11:12:43 policy-apex-pdp | [2024-04-25T11:10:47.687+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 11:12:43 policy-apex-pdp | [2024-04-25T11:10:47.689+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 11:12:43 policy-apex-pdp | [2024-04-25T11:10:50.211+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2, groupId=51f148c3-bcf8-4571-938a-66df08a6d568] Successfully joined group with generation Generation{generationId=1, memberId='consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2-3fcc063a-473e-44ee-8cdf-b3d14e063106', protocol='range'} 11:12:43 policy-apex-pdp | [2024-04-25T11:10:50.221+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2, groupId=51f148c3-bcf8-4571-938a-66df08a6d568] Finished assignment for group at generation 1: {consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2-3fcc063a-473e-44ee-8cdf-b3d14e063106=Assignment(partitions=[policy-pdp-pap-0])} 11:12:43 policy-apex-pdp | [2024-04-25T11:10:50.230+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2, groupId=51f148c3-bcf8-4571-938a-66df08a6d568] Successfully synced group in generation Generation{generationId=1, memberId='consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2-3fcc063a-473e-44ee-8cdf-b3d14e063106', protocol='range'} 11:12:43 policy-apex-pdp | [2024-04-25T11:10:50.230+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2, groupId=51f148c3-bcf8-4571-938a-66df08a6d568] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 11:12:43 policy-apex-pdp | [2024-04-25T11:10:50.233+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2, groupId=51f148c3-bcf8-4571-938a-66df08a6d568] Adding newly assigned partitions: policy-pdp-pap-0 11:12:43 policy-apex-pdp | [2024-04-25T11:10:50.241+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2, groupId=51f148c3-bcf8-4571-938a-66df08a6d568] Found no committed offset for partition policy-pdp-pap-0 11:12:43 policy-apex-pdp | [2024-04-25T11:10:50.250+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2, groupId=51f148c3-bcf8-4571-938a-66df08a6d568] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
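
Editor's note: the consumer sequence above is a normal Kafka group join, not a failure. Since Kafka 2.2 (KIP-394) the broker deliberately rejects the first JoinGroup so the client can retry with its broker-assigned member id; that is what the "rebalance failed ... MemberIdRequiredException" entry records, followed immediately by a successful rejoin. A minimal sketch of a consumer that would go through the same dance, reusing the group id, broker, and topic from the log (the deserializers and offset policy below are assumptions, not read from the log):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092");                   // broker from the log
            props.put("group.id", "51f148c3-bcf8-4571-938a-66df08a6d568");  // group id from the log
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            // With no committed offset for the group (as logged above), the
            // consumer falls back to this policy when resetting its position.
            props.put("auto.offset.reset", "latest");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // poll() drives the join/sync protocol seen in the log:
                // member-id assignment, rejoin, then partition assignment.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
                }
            }
        }
    }

"Found no committed offset" followed by the reset to FetchPosition{offset=1, ...} is consistent with an auto.offset.reset of latest: the new group starts at the end of the partition rather than replaying the topic.
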
11:12:43 policy-apex-pdp | [2024-04-25T11:10:56.168+00:00|INFO|RequestLog|qtp1863100050-33] 172.17.0.2 - policyadmin [25/Apr/2024:11:10:56 +0000] "GET /metrics HTTP/1.1" 200 10652 "-" "Prometheus/2.51.2" 11:12:43 policy-apex-pdp | [2024-04-25T11:11:06.731+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 11:12:43 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9eb865b0-4084-4b3e-8f19-d2c12c23c3a1","timestampMs":1714043466730,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup"} 11:12:43 policy-apex-pdp | [2024-04-25T11:11:06.760+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:12:43 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9eb865b0-4084-4b3e-8f19-d2c12c23c3a1","timestampMs":1714043466730,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup"} 11:12:43 policy-apex-pdp | [2024-04-25T11:11:06.763+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 11:12:43 policy-apex-pdp | [2024-04-25T11:11:06.911+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:12:43 policy-apex-pdp | {"source":"pap-daf76a7b-884e-46c4-ad7f-753bf9934851","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"438732f5-c797-488c-bd92-c5c81e74dcb8","timestampMs":1714043466836,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:12:43 policy-apex-pdp | [2024-04-25T11:11:06.923+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 11:12:43 policy-apex-pdp | [2024-04-25T11:11:06.924+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 11:12:43 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"3f516921-659d-4732-a3f6-09167a83f38a","timestampMs":1714043466923,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup"} 11:12:43 policy-apex-pdp | [2024-04-25T11:11:06.924+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 11:12:43 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"438732f5-c797-488c-bd92-c5c81e74dcb8","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"4f17d921-6005-4f56-9b0c-a5eef0ff196b","timestampMs":1714043466924,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:12:43 policy-apex-pdp | [2024-04-25T11:11:06.939+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:12:43 prometheus | ts=2024-04-25T11:09:59.378Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d 11:12:43 prometheus | ts=2024-04-25T11:09:59.378Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)" 11:12:43 prometheus | ts=2024-04-25T11:09:59.378Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, 
tags=netgo,builtinassets,stringlabels)" 11:12:43 prometheus | ts=2024-04-25T11:09:59.378Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 11:12:43 prometheus | ts=2024-04-25T11:09:59.378Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 11:12:43 prometheus | ts=2024-04-25T11:09:59.378Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 11:12:43 prometheus | ts=2024-04-25T11:09:59.382Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 11:12:43 prometheus | ts=2024-04-25T11:09:59.383Z caller=main.go:1129 level=info msg="Starting TSDB ..." 11:12:43 prometheus | ts=2024-04-25T11:09:59.385Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 11:12:43 prometheus | ts=2024-04-25T11:09:59.385Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 11:12:43 prometheus | ts=2024-04-25T11:09:59.387Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 11:12:43 prometheus | ts=2024-04-25T11:09:59.387Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.38µs 11:12:43 prometheus | ts=2024-04-25T11:09:59.387Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 11:12:43 prometheus | ts=2024-04-25T11:09:59.392Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 11:12:43 prometheus | ts=2024-04-25T11:09:59.392Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=26.391µs wal_replay_duration=4.205081ms wbl_replay_duration=270ns total_replay_duration=4.264032ms 11:12:43 prometheus | ts=2024-04-25T11:09:59.395Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 11:12:43 prometheus | ts=2024-04-25T11:09:59.395Z caller=main.go:1153 level=info msg="TSDB started" 11:12:43 prometheus | ts=2024-04-25T11:09:59.395Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 11:12:43 prometheus | ts=2024-04-25T11:09:59.396Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.179094ms db_storage=1.33µs remote_storage=1.77µs web_handler=400ns query_engine=1.04µs scrape=349.104µs scrape_sd=122.371µs notify=103.522µs notify_sd=11.83µs rules=1.49µs tracing=4.38µs 11:12:43 prometheus | ts=2024-04-25T11:09:59.396Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 11:12:43 prometheus | ts=2024-04-25T11:09:59.397Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 11:12:43 kafka | [2024-04-25 11:10:05,684] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 11:12:43 kafka | [2024-04-25 11:10:05,713] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 11:12:43 kafka | [2024-04-25 11:10:05,739] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1714043405726,1714043405726,1,0,0,72057610662576129,258,0,27 11:12:43 kafka | (kafka.zk.KafkaZkClient) 11:12:43 kafka | [2024-04-25 11:10:05,740] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 11:12:43 kafka | [2024-04-25 11:10:05,850] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 11:12:43 kafka | [2024-04-25 11:10:05,857] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:12:43 kafka | [2024-04-25 11:10:05,862] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:12:43 kafka | [2024-04-25 11:10:05,863] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:12:43 kafka | [2024-04-25 11:10:05,883] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:05,884] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 11:12:43 kafka | [2024-04-25 11:10:05,896] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:05,899] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:05,903] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:05,907] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 11:12:43 kafka | [2024-04-25 11:10:05,925] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 11:12:43 kafka | [2024-04-25 11:10:05,930] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 11:12:43 kafka | [2024-04-25 11:10:05,931] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 11:12:43 kafka | [2024-04-25 11:10:05,950] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) 11:12:43 kafka | [2024-04-25 11:10:05,951] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:05,958] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:05,963] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:05,970] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:05,988] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,004] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,007] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:12:43 kafka | [2024-04-25 11:10:06,014] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 11:12:43 kafka | [2024-04-25 11:10:06,024] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 11:12:43 kafka | [2024-04-25 11:10:06,026] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,026] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,026] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,027] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,031] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,031] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,031] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,032] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 11:12:43 kafka | [2024-04-25 11:10:06,033] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 11:12:43 policy-pap | Waiting for mariadb port 3306... 11:12:43 policy-pap | mariadb (172.17.0.5:3306) open 11:12:43 policy-pap | Waiting for kafka port 9092... 11:12:43 policy-pap | kafka (172.17.0.6:9092) open 11:12:43 policy-pap | Waiting for api port 6969... 11:12:43 policy-pap | api (172.17.0.9:6969) open 11:12:43 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 11:12:43 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 11:12:43 policy-pap | 11:12:43 policy-pap | . 
____ _ __ _ _ 11:12:43 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 11:12:43 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 11:12:43 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 11:12:43 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 11:12:43 policy-pap | =========|_|==============|___/=/_/_/_/ 11:12:43 policy-pap | :: Spring Boot :: (v3.1.10) 11:12:43 policy-pap | 11:12:43 policy-pap | [2024-04-25T11:10:34.380+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 11:12:43 policy-pap | [2024-04-25T11:10:34.483+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 37 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 11:12:43 policy-pap | [2024-04-25T11:10:34.484+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 11:12:43 policy-pap | [2024-04-25T11:10:36.893+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 11:12:43 policy-pap | [2024-04-25T11:10:37.025+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 120 ms. Found 7 JPA repository interfaces. 11:12:43 policy-pap | [2024-04-25T11:10:37.520+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 11:12:43 policy-pap | [2024-04-25T11:10:37.521+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 11:12:43 policy-pap | [2024-04-25T11:10:38.252+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 11:12:43 policy-pap | [2024-04-25T11:10:38.264+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 11:12:43 policy-pap | [2024-04-25T11:10:38.267+00:00|INFO|StandardService|main] Starting service [Tomcat] 11:12:43 policy-pap | [2024-04-25T11:10:38.267+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 11:12:43 policy-pap | [2024-04-25T11:10:38.375+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 11:12:43 policy-pap | [2024-04-25T11:10:38.376+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3788 ms 11:12:43 policy-pap | [2024-04-25T11:10:38.859+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 11:12:43 policy-pap | [2024-04-25T11:10:38.925+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final 11:12:43 policy-pap | [2024-04-25T11:10:39.287+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 11:12:43 policy-pap | [2024-04-25T11:10:39.398+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@72f8ae0c 11:12:43 policy-pap | [2024-04-25T11:10:39.401+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
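[editor's note] The pap startup above gates on its dependencies: the container blocks until mariadb:3306, kafka:9092, and api:6969 each accept a TCP connection ("Waiting for ... port ..." / "... open") before the Spring Boot application is launched. Below is a minimal sketch of that probe loop, for illustration only: the real image does this in its shell entrypoint, and the class and method names here are invented, not taken from the log.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public final class PortWait {
    // Poll until the host accepts a TCP connection, mirroring the
    // "Waiting for <service> port <n>..." lines in the pap output.
    static void waitForPort(String host, int port) throws InterruptedException {
        while (true) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 2_000);
                // The real log prints the resolved IP, e.g. "mariadb (172.17.0.5:3306) open"
                System.out.println(host + ":" + port + " open");
                return;
            } catch (IOException notUpYet) {
                Thread.sleep(1_000); // dependency not listening yet; retry
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        waitForPort("mariadb", 3306);
        waitForPort("kafka", 9092);
        waitForPort("api", 6969);
    }
}

Each probe retries until connect() succeeds, which is why the "open" lines appear only once the dependency containers are actually listening, and why the Spring Boot banner follows them.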
11:12:43 policy-pap | [2024-04-25T11:10:39.432+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect 11:12:43 policy-pap | [2024-04-25T11:10:41.168+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] 11:12:43 policy-pap | [2024-04-25T11:10:41.180+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 11:12:43 policy-pap | [2024-04-25T11:10:41.741+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 11:12:43 policy-pap | [2024-04-25T11:10:42.238+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 11:12:43 policy-pap | [2024-04-25T11:10:42.334+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 11:12:43 policy-pap | [2024-04-25T11:10:42.599+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:12:43 policy-pap | allow.auto.create.topics = true 11:12:43 policy-pap | auto.commit.interval.ms = 5000 11:12:43 policy-pap | auto.include.jmx.reporter = true 11:12:43 policy-pap | auto.offset.reset = latest 11:12:43 policy-pap | bootstrap.servers = [kafka:9092] 11:12:43 policy-pap | check.crcs = true 11:12:43 policy-pap | client.dns.lookup = use_all_dns_ips 11:12:43 policy-pap | client.id = consumer-6f727b00-63f5-4665-9483-d1a4468f597f-1 11:12:43 policy-pap | client.rack = 11:12:43 policy-pap | connections.max.idle.ms = 540000 11:12:43 policy-pap | default.api.timeout.ms = 60000 11:12:43 policy-pap | enable.auto.commit = true 11:12:43 policy-pap | exclude.internal.topics = true 11:12:43 policy-pap | fetch.max.bytes = 52428800 11:12:43 policy-pap | fetch.max.wait.ms = 500 11:12:43 policy-pap | fetch.min.bytes = 1 11:12:43 policy-pap | group.id = 6f727b00-63f5-4665-9483-d1a4468f597f 11:12:43 policy-pap | group.instance.id = null 11:12:43 policy-pap | heartbeat.interval.ms = 3000 11:12:43 policy-pap | interceptor.classes = [] 11:12:43 policy-pap | internal.leave.group.on.close = true 11:12:43 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:12:43 policy-pap | isolation.level = read_uncommitted 11:12:43 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:12:43 policy-pap | max.partition.fetch.bytes = 1048576 11:12:43 policy-pap | max.poll.interval.ms = 300000 11:12:43 zookeeper | [2024-04-25 11:10:01,089] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,089] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,089] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,089] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,089] 
INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,089] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,089] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,089] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,089] INFO (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,090] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,090] INFO Server environment:host.name=2f97c4d8af81 (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,090] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,090] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,090] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"438732f5-c797-488c-bd92-c5c81e74dcb8","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"4f17d921-6005-4f56-9b0c-a5eef0ff196b","timestampMs":1714043466924,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:12:43 policy-apex-pdp | [2024-04-25T11:11:06.940+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 11:12:43 policy-apex-pdp | [2024-04-25T11:11:06.940+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:12:43 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"3f516921-659d-4732-a3f6-09167a83f38a","timestampMs":1714043466923,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup"} 11:12:43 policy-apex-pdp | [2024-04-25T11:11:06.940+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 11:12:43 policy-apex-pdp | [2024-04-25T11:11:06.983+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:12:43 policy-apex-pdp | {"source":"pap-daf76a7b-884e-46c4-ad7f-753bf9934851","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d46037a2-b866-47bf-a7fa-0d41ffd427a3","timestampMs":1714043466837,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:12:43 policy-apex-pdp | [2024-04-25T11:11:06.985+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 11:12:43 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d46037a2-b866-47bf-a7fa-0d41ffd427a3","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"8709b6a1-29be-41f8-833f-68f7f04254cd","timestampMs":1714043466985,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:12:43 policy-apex-pdp | [2024-04-25T11:11:06.998+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:12:43 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d46037a2-b866-47bf-a7fa-0d41ffd427a3","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"8709b6a1-29be-41f8-833f-68f7f04254cd","timestampMs":1714043466985,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:12:43 policy-apex-pdp | [2024-04-25T11:11:06.998+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 11:12:43 policy-apex-pdp | [2024-04-25T11:11:07.036+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:12:43 policy-apex-pdp | {"source":"pap-daf76a7b-884e-46c4-ad7f-753bf9934851","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"401da80a-2685-4572-ae21-04a3b9a931b6","timestampMs":1714043467004,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:12:43 policy-apex-pdp | [2024-04-25T11:11:07.038+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 11:12:43 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"401da80a-2685-4572-ae21-04a3b9a931b6","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"96008e0c-1a65-490b-9fb1-4e4b6c4cd46e","timestampMs":1714043467037,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:12:43 policy-apex-pdp | [2024-04-25T11:11:07.048+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:12:43 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"401da80a-2685-4572-ae21-04a3b9a931b6","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"96008e0c-1a65-490b-9fb1-4e4b6c4cd46e","timestampMs":1714043467037,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:12:43 policy-apex-pdp | [2024-04-25T11:11:07.048+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 11:12:43 policy-apex-pdp | [2024-04-25T11:11:56.081+00:00|INFO|RequestLog|qtp1863100050-28] 172.17.0.2 - policyadmin [25/Apr/2024:11:11:56 +0000] "GET /metrics HTTP/1.1" 200 10653 "-" "Prometheus/2.51.2" 11:12:43 kafka | [2024-04-25 11:10:06,037] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:06,039] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 11:12:43 kafka | [2024-04-25 11:10:06,047] INFO [ReplicaStateMachine controllerId=1] Initializing replica 
state (kafka.controller.ZkReplicaStateMachine) 11:12:43 kafka | [2024-04-25 11:10:06,048] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 11:12:43 kafka | [2024-04-25 11:10:06,052] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 11:12:43 kafka | [2024-04-25 11:10:06,053] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 11:12:43 kafka | [2024-04-25 11:10:06,054] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 11:12:43 kafka | [2024-04-25 11:10:06,055] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 11:12:43 kafka | [2024-04-25 11:10:06,058] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 11:12:43 kafka | [2024-04-25 11:10:06,058] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,061] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 11:12:43 kafka | [2024-04-25 11:10:06,061] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 11:12:43 kafka | [2024-04-25 11:10:06,065] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.6:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 11:12:43 kafka | [2024-04-25 11:10:06,068] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) 11:12:43 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. 
11:12:43 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 11:12:43 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) 11:12:43 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) 11:12:43 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130) 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.872014297Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=257.262µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.876218399Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.876486001Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=267.382µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.879598123Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.880475392Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=877.319µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.883432182Z level=info msg="Executing migration" id="Add isPublic for dashboard" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.887176579Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.743497ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.890901967Z level=info msg="Executing migration" id="create data_source table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.892570424Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.665277ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.897757056Z level=info msg="Executing migration" id="add index data_source.account_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.898600865Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=843.679µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.901745046Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.902596905Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=856.509µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.905517475Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.906267252Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=748.487µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.91003018Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.910820969Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=791.008µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.913790328Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.920187413Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.397055ms 11:12:43 grafana | logger=migrator 
t=2024-04-25T11:10:00.951441287Z level=info msg="Executing migration" id="create data_source table v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.953074564Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.633297ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.956726191Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.958117034Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.390803ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.961458999Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.962345877Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=886.808µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.966503299Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.967086766Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=583.027µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.970301328Z level=info msg="Executing migration" id="Add column with_credentials" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.972693872Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.391724ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.976407439Z level=info msg="Executing migration" id="Add secure json data column" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:00.980618201Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=4.201972ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.010258971Z level=info msg="Executing migration" id="Update data_source table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.010299761Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=42.62µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.013820977Z level=info msg="Executing migration" id="Update initial version to 1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.014260723Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=440.066µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.018744583Z level=info msg="Executing migration" id="Add read_only data column" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.021189415Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.444392ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.024412078Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.024679051Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=270.753µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.029001349Z level=info msg="Executing migration" id="Update json_data with nulls" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.029245342Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=243.453µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.032304122Z level=info msg="Executing migration" id="Add uid column" 
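[editor's note] The grafana migrator entries above follow the same pattern as the policy-db-migrator output later in this log: apply each named migration in order, rely on idempotent DDL (CREATE TABLE IF NOT EXISTS ...), and log the elapsed time per step. A sketch of that loop, assuming an in-memory H2 database is on the classpath; the JDBC URL, table, and migration id below are placeholders, not taken from the log.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.LinkedHashMap;
import java.util.Map;

public final class MiniMigrator {
    public static void main(String[] args) throws Exception {
        // Ordered map: migration id -> DDL, applied first to last.
        Map<String, String> migrations = new LinkedHashMap<>();
        migrations.put("create example table",
                "CREATE TABLE IF NOT EXISTS example (name VARCHAR(120) NOT NULL, "
              + "version VARCHAR(20) NOT NULL, PRIMARY KEY (name, version))");
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement st = c.createStatement()) {
            for (Map.Entry<String, String> m : migrations.entrySet()) {
                long t0 = System.nanoTime();
                st.execute(m.getValue()); // IF NOT EXISTS keeps reruns idempotent
                long us = (System.nanoTime() - t0) / 1_000;
                System.out.printf(
                        "Migration successfully executed id=\"%s\" duration=%dµs%n",
                        m.getKey(), us);
            }
        }
    }
}

Timing every step, as the migrator does with its duration= field, makes slow migrations stand out immediately in CI output like this run.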
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.034761995Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.448883ms 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../s
hare/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO Server 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,091] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,092] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 11:12:43 zookeeper | [2024-04-25 11:10:01,093] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,093] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,094] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 11:12:43 zookeeper | [2024-04-25 11:10:01,094] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 11:12:43 zookeeper | [2024-04-25 11:10:01,095] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:12:43 zookeeper | [2024-04-25 11:10:01,095] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:12:43 zookeeper | [2024-04-25 11:10:01,095] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:12:43 zookeeper | [2024-04-25 11:10:01,095] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:12:43 policy-pap | max.poll.records = 500 11:12:43 policy-pap | metadata.max.age.ms = 300000 11:12:43 policy-pap | metric.reporters = [] 11:12:43 policy-pap | metrics.num.samples = 2 11:12:43 policy-pap | metrics.recording.level = INFO 11:12:43 policy-pap | metrics.sample.window.ms = 30000 11:12:43 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:12:43 policy-pap | receive.buffer.bytes = 65536 11:12:43 policy-pap | reconnect.backoff.max.ms = 1000 11:12:43 policy-pap | reconnect.backoff.ms = 50 11:12:43 policy-pap | request.timeout.ms = 30000 11:12:43 policy-pap | retry.backoff.ms = 100 11:12:43 policy-pap | sasl.client.callback.handler.class = null 11:12:43 policy-pap | sasl.jaas.config = null 11:12:43 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:12:43 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:12:43 policy-pap | sasl.kerberos.service.name = null 11:12:43 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:12:43 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:12:43 policy-pap | sasl.login.callback.handler.class = null 11:12:43 policy-pap | sasl.login.class = null 11:12:43 policy-pap | sasl.login.connect.timeout.ms = null 11:12:43 policy-pap | sasl.login.read.timeout.ms = null 11:12:43 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:12:43 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:12:43 policy-pap | sasl.login.refresh.window.factor = 0.8 11:12:43 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:12:43 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:12:43 policy-pap | sasl.login.retry.backoff.ms = 100 11:12:43 policy-pap | sasl.mechanism = GSSAPI 11:12:43 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:12:43 policy-pap | sasl.oauthbearer.expected.audience = null 11:12:43 zookeeper | [2024-04-25 11:10:01,095] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:12:43 zookeeper | [2024-04-25 11:10:01,095] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:12:43 zookeeper | [2024-04-25 11:10:01,097] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,098] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,098] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 11:12:43 zookeeper | [2024-04-25 11:10:01,098] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 11:12:43 zookeeper | [2024-04-25 11:10:01,098] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms 
maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,118] INFO Logging initialized @535ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 11:12:43 zookeeper | [2024-04-25 11:10:01,235] WARN o.e.j.s.ServletContextHandler@311bf055{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 11:12:43 zookeeper | [2024-04-25 11:10:01,235] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 11:12:43 zookeeper | [2024-04-25 11:10:01,259] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) 11:12:43 zookeeper | [2024-04-25 11:10:01,300] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 11:12:43 zookeeper | [2024-04-25 11:10:01,300] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 11:12:43 zookeeper | [2024-04-25 11:10:01,302] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) 11:12:43 zookeeper | [2024-04-25 11:10:01,315] WARN ServletContext@o.e.j.s.ServletContextHandler@311bf055{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 11:12:43 zookeeper | [2024-04-25 11:10:01,328] INFO Started o.e.j.s.ServletContextHandler@311bf055{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 11:12:43 zookeeper | [2024-04-25 11:10:01,346] INFO Started ServerConnector@6f53b8a{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 11:12:43 zookeeper | [2024-04-25 11:10:01,346] INFO Started @763ms (org.eclipse.jetty.server.Server) 11:12:43 zookeeper | [2024-04-25 11:10:01,346] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,352] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 11:12:43 zookeeper | [2024-04-25 11:10:01,353] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 11:12:43 zookeeper | [2024-04-25 11:10:01,355] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) 11:12:43 zookeeper | [2024-04-25 11:10:01,357] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 11:12:43 zookeeper | [2024-04-25 11:10:01,379] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 11:12:43 zookeeper | [2024-04-25 11:10:01,380] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 11:12:43 zookeeper | [2024-04-25 11:10:01,381] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 11:12:43 zookeeper | [2024-04-25 11:10:01,381] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 11:12:43 zookeeper | [2024-04-25 11:10:01,390] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 11:12:43 zookeeper | [2024-04-25 11:10:01,390] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 11:12:43 zookeeper | [2024-04-25 11:10:01,398] INFO Snapshot loaded in 16 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 11:12:43 zookeeper | [2024-04-25 11:10:01,404] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 11:12:43 zookeeper | [2024-04-25 11:10:01,406] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:12:43 zookeeper | [2024-04-25 11:10:01,420] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 11:12:43 zookeeper | [2024-04-25 11:10:01,422] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 11:12:43 zookeeper | [2024-04-25 11:10:01,445] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 11:12:43 zookeeper | [2024-04-25 11:10:01,445] INFO ZooKeeper audit is disabled. 
(org.apache.zookeeper.audit.ZKAuditProvider) 11:12:43 zookeeper | [2024-04-25 11:10:02,712] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0670-toscapolicies.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0690-toscapolicy.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) 
NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0730-toscaproperty.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0770-toscarequirement.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0780-toscarequirements.sql 11:12:43 policy-db-migrator 
| -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-pap | sasl.oauthbearer.expected.issuer = null 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:12:43 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:12:43 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:12:43 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:12:43 policy-pap | security.protocol = PLAINTEXT 11:12:43 policy-pap | security.providers = null 11:12:43 policy-pap | send.buffer.bytes = 131072 11:12:43 policy-pap | session.timeout.ms = 45000 11:12:43 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:12:43 policy-pap | socket.connection.setup.timeout.ms = 10000 11:12:43 policy-pap | ssl.cipher.suites = null 11:12:43 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:12:43 policy-pap | ssl.endpoint.identification.algorithm = https 11:12:43 policy-pap | ssl.engine.factory.class = null 11:12:43 policy-pap | ssl.key.password = null 11:12:43 policy-pap | ssl.keymanager.algorithm = SunX509 11:12:43 policy-pap | ssl.keystore.certificate.chain = null 11:12:43 
policy-pap | ssl.keystore.key = null 11:12:43 policy-pap | ssl.keystore.location = null 11:12:43 policy-pap | ssl.keystore.password = null 11:12:43 policy-pap | ssl.keystore.type = JKS 11:12:43 policy-pap | ssl.protocol = TLSv1.3 11:12:43 policy-pap | ssl.provider = null 11:12:43 policy-pap | ssl.secure.random.implementation = null 11:12:43 policy-pap | ssl.trustmanager.algorithm = PKIX 11:12:43 policy-pap | ssl.truststore.certificates = null 11:12:43 policy-pap | ssl.truststore.location = null 11:12:43 policy-pap | ssl.truststore.password = null 11:12:43 policy-pap | ssl.truststore.type = JKS 11:12:43 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:12:43 policy-pap | 11:12:43 policy-pap | [2024-04-25T11:10:42.762+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:12:43 policy-pap | [2024-04-25T11:10:42.762+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:12:43 policy-pap | [2024-04-25T11:10:42.762+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714043442759 11:12:43 policy-pap | [2024-04-25T11:10:42.765+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-1, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Subscribed to topic(s): policy-pdp-pap 11:12:43 policy-pap | [2024-04-25T11:10:42.766+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:12:43 policy-pap | allow.auto.create.topics = true 11:12:43 policy-pap | auto.commit.interval.ms = 5000 11:12:43 policy-pap | auto.include.jmx.reporter = true 11:12:43 policy-pap | auto.offset.reset = latest 11:12:43 policy-pap | bootstrap.servers = [kafka:9092] 11:12:43 policy-pap | check.crcs = true 11:12:43 policy-pap | client.dns.lookup = use_all_dns_ips 11:12:43 policy-pap | client.id = consumer-policy-pap-2 11:12:43 policy-pap | client.rack = 11:12:43 policy-pap | connections.max.idle.ms = 540000 11:12:43 policy-pap | default.api.timeout.ms = 60000 11:12:43 policy-pap | enable.auto.commit = true 11:12:43 policy-pap | exclude.internal.topics = true 11:12:43 policy-pap | fetch.max.bytes = 52428800 11:12:43 policy-pap | fetch.max.wait.ms = 500 11:12:43 policy-pap | fetch.min.bytes = 1 11:12:43 policy-pap | group.id = policy-pap 11:12:43 policy-pap | group.instance.id = null 11:12:43 policy-pap | heartbeat.interval.ms = 3000 11:12:43 policy-pap | interceptor.classes = [] 11:12:43 policy-pap | internal.leave.group.on.close = true 11:12:43 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:12:43 policy-pap | isolation.level = read_uncommitted 11:12:43 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:12:43 policy-pap | max.partition.fetch.bytes = 1048576 11:12:43 policy-pap | max.poll.interval.ms = 300000 11:12:43 policy-pap | max.poll.records = 500 11:12:43 policy-pap | metadata.max.age.ms = 300000 11:12:43 policy-pap | metric.reporters = [] 11:12:43 policy-pap | metrics.num.samples = 2 11:12:43 policy-pap | metrics.recording.level = INFO 11:12:43 policy-pap | metrics.sample.window.ms = 30000 11:12:43 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) 
NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.037955137Z level=info msg="Executing migration" id="Update uid value" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.038221711Z level=info msg="Migration successfully executed" id="Update uid value" duration=266.144µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.040974857Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.041836508Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=854.121µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.045884842Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.046699103Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=813.901µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.049840115Z level=info msg="Executing migration" id="create api_key table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.050841039Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.000664ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.054917322Z level=info msg="Executing migration" id="add index api_key.account_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.055798574Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=881.212µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.059157258Z level=info msg="Executing migration" id="add index api_key.key" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.06002322Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=865.232µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.064203566Z level=info msg="Executing migration" id="add index api_key.account_id_name" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.065170559Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=966.673µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.069177522Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.070007712Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=830.36µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.072989542Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.073762963Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=773.551µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.077788936Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.078639817Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=850.701µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.083589713Z level=info 
msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.091907644Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.316521ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.096447115Z level=info msg="Executing migration" id="create api_key table v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.097134494Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=685.309µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.100720711Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.101413141Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=697.12µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.10513361Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.106079633Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=946.023µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.109132373Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.110084357Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=951.734µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.113918717Z level=info msg="Executing migration" id="copy api_key v1 to v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.114331973Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=413.445µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.116650154Z level=info msg="Executing migration" id="Drop old table api_key_v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.117305922Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=656.068µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.121382847Z level=info msg="Executing migration" id="Update api_key table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.121493938Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=115.241µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.124691591Z level=info msg="Executing migration" id="Add expires to api_key table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.127514168Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.815557ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.131741994Z level=info msg="Executing migration" id="Add service account foreign key" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.134566091Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.824077ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.137917587Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.13819049Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=277.563µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.140656543Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 11:12:43 
grafana | logger=migrator t=2024-04-25T11:10:01.14341731Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.760917ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.148298284Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.151119162Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.821578ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.155272888Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.156171979Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=899.001µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.15999632Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.160642169Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=646.06µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.16598236Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.167036934Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.059884ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.172924812Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.17426838Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.332938ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.177728536Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.178657908Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=929.222µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.186118008Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.187213052Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.095554ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.193492095Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.193655817Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=164.522µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.197520209Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.197646631Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=127.282µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.201408151Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 11:12:43 policy-db-migrator | > upgrade 0820-toscatrigger.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, 
toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON 
toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-pap | receive.buffer.bytes = 65536 11:12:43 policy-pap | reconnect.backoff.max.ms = 1000 11:12:43 policy-pap | reconnect.backoff.ms = 50 11:12:43 policy-pap | request.timeout.ms = 30000 11:12:43 policy-pap | retry.backoff.ms = 100 11:12:43 policy-pap | sasl.client.callback.handler.class = null 11:12:43 policy-pap | sasl.jaas.config = null 11:12:43 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:12:43 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:12:43 policy-pap | sasl.kerberos.service.name = null 11:12:43 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:12:43 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:12:43 policy-pap | sasl.login.callback.handler.class = null 11:12:43 policy-pap | sasl.login.class = null 11:12:43 policy-pap | sasl.login.connect.timeout.ms = null 11:12:43 policy-pap | sasl.login.read.timeout.ms = null 11:12:43 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:12:43 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:12:43 policy-pap | sasl.login.refresh.window.factor = 0.8 11:12:43 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:12:43 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:12:43 policy-pap | sasl.login.retry.backoff.ms = 100 11:12:43 policy-pap | sasl.mechanism = GSSAPI 11:12:43 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:12:43 policy-pap | sasl.oauthbearer.expected.audience = null 11:12:43 policy-pap | sasl.oauthbearer.expected.issuer = null 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:12:43 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:12:43 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:12:43 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:12:43 policy-pap | security.protocol = PLAINTEXT 11:12:43 policy-pap | security.providers = null 11:12:43 policy-pap | send.buffer.bytes = 131072 11:12:43 
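Each policy-db-migrator block above announces a script ("> upgrade NNNN-*.sql") and then prints the single DDL statement between the dashed separators before running it. A rough JDBC sketch of applying one such script follows; the connection handling and the helper itself are assumptions for illustration, not the actual migrator implementation.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public final class UpgradeScriptSketch {
        // Reads one upgrade script and executes its DDL, echoing the header seen in the log.
        public static void apply(Path script, String jdbcUrl, String user, String password) throws Exception {
            String ddl = Files.readString(script);
            try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
                 Statement stmt = conn.createStatement()) {
                System.out.println("> upgrade " + script.getFileName());
                stmt.execute(ddl); // e.g. one of the CREATE INDEX statements shown above
            }
        }
    }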
policy-pap | session.timeout.ms = 45000 11:12:43 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:12:43 policy-pap | socket.connection.setup.timeout.ms = 10000 11:12:43 policy-pap | ssl.cipher.suites = null 11:12:43 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:12:43 policy-pap | ssl.endpoint.identification.algorithm = https 11:12:43 policy-pap | ssl.engine.factory.class = null 11:12:43 policy-pap | ssl.key.password = null 11:12:43 policy-pap | ssl.keymanager.algorithm = SunX509 11:12:43 policy-pap | ssl.keystore.certificate.chain = null 11:12:43 policy-pap | ssl.keystore.key = null 11:12:43 policy-pap | ssl.keystore.location = null 11:12:43 policy-pap | ssl.keystore.password = null 11:12:43 policy-pap | ssl.keystore.type = JKS 11:12:43 policy-pap | ssl.protocol = TLSv1.3 11:12:43 policy-pap | ssl.provider = null 11:12:43 policy-pap | ssl.secure.random.implementation = null 11:12:43 policy-pap | ssl.trustmanager.algorithm = PKIX 11:12:43 policy-pap | ssl.truststore.certificates = null 11:12:43 policy-pap | ssl.truststore.location = null 11:12:43 policy-pap | ssl.truststore.password = null 11:12:43 policy-pap | ssl.truststore.type = JKS 11:12:43 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:12:43 policy-pap | 11:12:43 policy-pap | [2024-04-25T11:10:42.772+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:12:43 policy-pap | [2024-04-25T11:10:42.772+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:12:43 policy-pap | [2024-04-25T11:10:42.772+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714043442772 11:12:43 policy-pap | [2024-04-25T11:10:42.773+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 11:12:43 policy-pap | [2024-04-25T11:10:43.077+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 11:12:43 kafka | [2024-04-25 11:10:06,068] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) 11:12:43 kafka | [2024-04-25 11:10:06,070] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) 11:12:43 kafka | [2024-04-25 11:10:06,070] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,071] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,071] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,072] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,073] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,078] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 11:12:43 kafka | [2024-04-25 11:10:06,098] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:06,098] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser) 11:12:43 kafka | [2024-04-25 11:10:06,098] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser) 11:12:43 kafka | [2024-04-25 11:10:06,098] INFO Kafka startTimeMs: 1714043406091 (org.apache.kafka.common.utils.AppInfoParser) 11:12:43 kafka | [2024-04-25 11:10:06,100] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 11:12:43 kafka | [2024-04-25 11:10:06,177] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 11:12:43 kafka | [2024-04-25 11:10:06,347] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 11:12:43 kafka | [2024-04-25 11:10:06,353] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 11:12:43 kafka | [2024-04-25 11:10:06,362] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:11,100] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:11,101] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:45,054] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 11:12:43 kafka | [2024-04-25 11:10:45,055] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 
-> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 11:12:43 kafka | [2024-04-25 11:10:45,057] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:45,067] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:45,118] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(g2fO5llxRxC6H0s6V7OP0w),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(VI4acjwHSb2Uel08QoceSw),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:45,119] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 11:12:43 kafka | [2024-04-25 11:10:45,122] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,123] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,123] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,123] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,123] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,123] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,123] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,123] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,123] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,123] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,123] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,124] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 
state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,124] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,124] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,124] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,124] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,124] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,124] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,124] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,124] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 11:12:43 
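The 0960-and-later scripts that follow turn the indexes created earlier into real foreign keys with ON UPDATE RESTRICT ON DELETE RESTRICT, so a (name, version) pair can no longer be removed or renamed while another TOSCA concept still references it. A small illustrative JDBC fragment of what RESTRICT means in practice; table and column names are taken from the log, while the method and the assumption that the driver surfaces the violation as SQLIntegrityConstraintViolationException (as MySQL Connector/J does) are hypothetical.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLIntegrityConstraintViolationException;

    final class RestrictDemo {
        // Attempts to delete a toscarequirements row; if a toscanodetype (or
        // toscanodetemplate) still references it, ON DELETE RESTRICT rejects the delete.
        static boolean deleteRequirement(Connection conn, String name, String version) throws Exception {
            try (PreparedStatement ps = conn.prepareStatement(
                    "DELETE FROM toscarequirements WHERE name = ? AND version = ?")) {
                ps.setString(1, name);
                ps.setString(2, version);
                return ps.executeUpdate() > 0;
            } catch (SQLIntegrityConstraintViolationException e) {
                return false; // still referenced: the constraint blocked the delete
            }
        }
    }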
policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT 
FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 kafka | [2024-04-25 11:10:45,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,125] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,125] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,126] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,126] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,127] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,127] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,133] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,133] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,133] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from 
NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,133] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,133] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,133] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,133] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,133] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 policy-pap | [2024-04-25T11:10:43.236+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 11:12:43 policy-pap | [2024-04-25T11:10:43.463+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@297dff3a, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@30437e9c, org.springframework.security.web.context.SecurityContextHolderFilter@3051e476, org.springframework.security.web.header.HeaderWriterFilter@6719f206, org.springframework.security.web.authentication.logout.LogoutFilter@cea67b1, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@6ee1ddcf, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@6d9ee75a, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@36cf6377, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@2e057637, org.springframework.security.web.access.ExceptionTranslationFilter@333a2df2, org.springframework.security.web.access.intercept.AuthorizationFilter@41abee65] 11:12:43 policy-pap | [2024-04-25T11:10:44.264+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 11:12:43 policy-pap | [2024-04-25T11:10:44.360+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 11:12:43 policy-pap | [2024-04-25T11:10:44.382+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 11:12:43 policy-pap | [2024-04-25T11:10:44.402+00:00|INFO|ServiceManager|main] Policy PAP starting 11:12:43 policy-pap | [2024-04-25T11:10:44.402+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 11:12:43 policy-pap | [2024-04-25T11:10:44.403+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 11:12:43 policy-pap | [2024-04-25T11:10:44.404+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 11:12:43 policy-pap | [2024-04-25T11:10:44.404+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 11:12:43 policy-pap | [2024-04-25T11:10:44.404+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 11:12:43 policy-pap | [2024-04-25T11:10:44.404+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 11:12:43 policy-pap | [2024-04-25T11:10:44.406+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6f727b00-63f5-4665-9483-d1a4468f597f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@72a61e61 11:12:43 policy-pap | [2024-04-25T11:10:44.417+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6f727b00-63f5-4665-9483-d1a4468f597f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase 
[apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 11:12:43 policy-pap | [2024-04-25T11:10:44.418+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:12:43 policy-pap | allow.auto.create.topics = true 11:12:43 policy-pap | auto.commit.interval.ms = 5000 11:12:43 policy-pap | auto.include.jmx.reporter = true 11:12:43 policy-pap | auto.offset.reset = latest 11:12:43 policy-pap | bootstrap.servers = [kafka:9092] 11:12:43 policy-pap | check.crcs = true 11:12:43 policy-pap | client.dns.lookup = use_all_dns_ips 11:12:43 policy-pap | client.id = consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3 11:12:43 policy-pap | client.rack = 11:12:43 policy-pap | connections.max.idle.ms = 540000 11:12:43 policy-pap | default.api.timeout.ms = 60000 11:12:43 policy-pap | enable.auto.commit = true 11:12:43 policy-pap | exclude.internal.topics = true 11:12:43 policy-pap | fetch.max.bytes = 52428800 11:12:43 policy-pap | fetch.max.wait.ms = 500 11:12:43 policy-pap | fetch.min.bytes = 1 11:12:43 policy-pap | group.id = 6f727b00-63f5-4665-9483-d1a4468f597f 11:12:43 policy-pap | group.instance.id = null 11:12:43 policy-pap | heartbeat.interval.ms = 3000 11:12:43 policy-pap | interceptor.classes = [] 11:12:43 policy-pap | internal.leave.group.on.close = true 11:12:43 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:12:43 policy-pap | isolation.level = read_uncommitted 11:12:43 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:12:43 policy-pap | max.partition.fetch.bytes = 1048576 11:12:43 policy-pap | max.poll.interval.ms = 300000 11:12:43 policy-pap | max.poll.records = 500 11:12:43 policy-pap | metadata.max.age.ms = 300000 11:12:43 policy-pap | metric.reporters = [] 11:12:43 policy-pap | metrics.num.samples = 2 11:12:43 policy-pap | metrics.recording.level = INFO 11:12:43 policy-pap | metrics.sample.window.ms = 30000 11:12:43 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:12:43 policy-pap | receive.buffer.bytes = 65536 11:12:43 policy-pap | reconnect.backoff.max.ms = 1000 11:12:43 policy-pap | reconnect.backoff.ms = 50 11:12:43 policy-pap | request.timeout.ms = 30000 11:12:43 policy-pap | retry.backoff.ms = 100 11:12:43 policy-pap | sasl.client.callback.handler.class = null 11:12:43 policy-pap | sasl.jaas.config = null 11:12:43 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:12:43 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:12:43 policy-pap | sasl.kerberos.service.name = null 11:12:43 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:12:43 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:12:43 policy-pap | sasl.login.callback.handler.class = null 11:12:43 policy-pap | sasl.login.class = null 11:12:43 policy-pap | sasl.login.connect.timeout.ms = null 11:12:43 policy-pap | sasl.login.read.timeout.ms = null 11:12:43 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:12:43 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:12:43 policy-pap | sasl.login.refresh.window.factor = 0.8 11:12:43 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:12:43 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:12:43 policy-pap | sasl.login.retry.backoff.ms = 100 11:12:43 policy-pap | 
sasl.mechanism = GSSAPI 11:12:43 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:12:43 policy-pap | sasl.oauthbearer.expected.audience = null 11:12:43 policy-pap | sasl.oauthbearer.expected.issuer = null 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.204644414Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.236143ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.209160304Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.212218685Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.057831ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.216555733Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.216715835Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=159.952µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.220424594Z level=info msg="Executing migration" id="create quota table v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.222637894Z level=info msg="Migration successfully executed" id="create quota table v1" duration=2.212199ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.22756637Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.229848909Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=2.282019ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.234861386Z level=info msg="Executing migration" id="Update quota table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.234981907Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=122.061µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.238762958Z level=info msg="Executing migration" id="create plugin_setting table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.239740461Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=977.753µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.244003737Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.246682483Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=2.683066ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.251235844Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.254582188Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.343834ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.257988873Z level=info msg="Executing migration" id="Update plugin_setting table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.258133146Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=136.943µs 11:12:43 grafana | 
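
The ConsumerConfig dump above shows the effective settings PAP's KafkaConsumerWrapper ends up with for the policy-pdp-pap source. A minimal standalone sketch reproducing the key values (bootstrap servers, group id, deserializers, and offset reset copied from the dump; the class name is hypothetical, not the actual policy/common implementation):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapSourceSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "6f727b00-63f5-4665-9483-d1a4468f597f");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"); // matches the dump
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // fetchTimeout=15000 in the log corresponds to a 15 s poll window.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(15_000));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }

Everything not set above stays at the defaults Kafka echoes back in the dump (enable.auto.commit=true, session.timeout.ms=45000, security.protocol=PLAINTEXT, and so on).
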
logger=migrator t=2024-04-25T11:10:01.26219765Z level=info msg="Executing migration" id="create session table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.263268364Z level=info msg="Migration successfully executed" id="create session table" duration=1.074624ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.267713073Z level=info msg="Executing migration" id="Drop old table playlist table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.267924916Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=211.583µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.272594978Z level=info msg="Executing migration" id="Drop old table playlist_item table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.272853152Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=257.864µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.27726229Z level=info msg="Executing migration" id="create playlist table v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.278551718Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.291068ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.283388022Z level=info msg="Executing migration" id="create playlist item table v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.28478115Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.394828ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.290925312Z level=info msg="Executing migration" id="Update playlist table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.291206875Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=286.433µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.295455732Z level=info msg="Executing migration" id="Update playlist_item table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.295563864Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=114.592µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.299481326Z level=info msg="Executing migration" id="Add playlist column created_at" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.303315957Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.835251ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.307826187Z level=info msg="Executing migration" id="Add playlist column updated_at" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.310970268Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.137651ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.314493246Z level=info msg="Executing migration" id="drop preferences table v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.314708499Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=215.353µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.318866124Z level=info msg="Executing migration" id="drop preferences table v3" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.319083636Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=218.192µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.323636157Z level=info msg="Executing migration" id="create preferences table v3" 11:12:43 grafana | 
logger=migrator t=2024-04-25T11:10:01.324562279Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=925.412µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.32829097Z level=info msg="Executing migration" id="Update preferences table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.328538333Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=249.683µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.371544594Z level=info msg="Executing migration" id="Add column team_id in preferences" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.374800768Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.258434ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.380640995Z level=info msg="Executing migration" id="Update team_id column values in preferences" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.380829708Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=189.213µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.387790881Z level=info msg="Executing migration" id="Add column week_start in preferences" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.394307027Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=6.499096ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.397440179Z level=info msg="Executing migration" id="Add column preferences.json_data" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.3998045Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.365161ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.402358834Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.402410355Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=51.821µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.410980799Z level=info msg="Executing migration" id="Add preferences index org_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.412668032Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.688413ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.417265452Z level=info msg="Executing migration" id="Add preferences index user_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.418695892Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.43062ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.424730492Z level=info msg="Executing migration" id="create alert table v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.427811183Z level=info msg="Migration successfully executed" id="create alert table v1" duration=3.086121ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.436849663Z level=info msg="Executing migration" id="add index alert org_id & id " 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.438697258Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.845505ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.443288619Z level=info msg="Executing migration" id="add index alert state" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.44412023Z level=info 
msg="Migration successfully executed" id="add index alert state" duration=830.771µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.447795209Z level=info msg="Executing migration" id="add index alert dashboard_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.448698341Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=902.792µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.453521735Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.45463893Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.116975ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.457733901Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.458708704Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=975.113µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.461743884Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.462700248Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=956.014µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.467370479Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.478250384Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.878555ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.481492987Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.482025324Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=531.817µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.485327858Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.486100188Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=772.34µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.491100855Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.49145529Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=355.785µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.494626792Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.495551204Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=924.192µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.499147023Z level=info msg="Executing migration" id="create alert_notification table v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.50050947Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.362208ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.505292304Z level=info 
msg="Executing migration" id="Add column is_default" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.509338287Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.046163ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.512428089Z level=info msg="Executing migration" id="Add column frequency" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.515915216Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.487006ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.518763703Z level=info msg="Executing migration" id="Add column send_reminder" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.52228372Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.515877ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.525204909Z level=info msg="Executing migration" id="Add column disable_resolve_message" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.528897738Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.692169ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.53358856Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.534514802Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=925.462µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.537549393Z level=info msg="Executing migration" id="Update alert table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.537581943Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=33.09µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.540842776Z level=info msg="Executing migration" id="Update alert_notification table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.540911107Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=69.321µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.545845823Z level=info msg="Executing migration" id="create notification_journal table v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.547014689Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.172426ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.550449725Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.552046506Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.59338ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.557504808Z level=info msg="Executing migration" id="drop alert_notification_journal" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.5583344Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=834.422µs 11:12:43 kafka | [2024-04-25 11:10:45,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 
11:10:45,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,134] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,135] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | 
[2024-04-25 11:10:45,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,136] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,137] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,137] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,137] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,137] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,137] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,137] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,137] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 11:12:43 
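
The controller TRACE lines above walk replica 1 of each __consumer_offsets partition through the first hop of Kafka's replica state machine (NonExistentReplica to NewReplica); the INFO lines that follow promote the partitions themselves from NewPartition to OnlinePartition with an initial LeaderAndIsr. A toy sketch of that transition check, trimmed to the states visible in this log and making no claim about Kafka's actual Scala implementation:

    import java.util.Map;
    import java.util.Set;

    public class ReplicaStateSketch {
        enum State { NON_EXISTENT_REPLICA, NEW_REPLICA, ONLINE_REPLICA, OFFLINE_REPLICA }

        // Allowed predecessor states per target state; illustrative only.
        static final Map<State, Set<State>> VALID_FROM = Map.of(
            State.NEW_REPLICA, Set.of(State.NON_EXISTENT_REPLICA),
            State.ONLINE_REPLICA, Set.of(State.NEW_REPLICA, State.OFFLINE_REPLICA),
            State.OFFLINE_REPLICA, Set.of(State.NEW_REPLICA, State.ONLINE_REPLICA),
            State.NON_EXISTENT_REPLICA, Set.of(State.OFFLINE_REPLICA));

        static State transition(String partition, State from, State to) {
            if (!VALID_FROM.get(to).contains(from)) {
                throw new IllegalStateException(partition + ": illegal transition " + from + " -> " + to);
            }
            System.out.printf("Changed state of replica for partition %s from %s to %s%n", partition, from, to);
            return to;
        }

        public static void main(String[] args) {
            // Mirrors the TRACE entries above.
            transition("__consumer_offsets-14", State.NON_EXISTENT_REPLICA, State.NEW_REPLICA);
        }
    }
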
kafka | [2024-04-25 11:10:45,137] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,312] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.562544475Z level=info msg="Executing migration" id="create alert_notification_state table v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.563819723Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.273628ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.567185047Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.568653776Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.467999ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.572001152Z level=info msg="Executing migration" id="Add for to alert table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.57566057Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.658438ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.580350773Z level=info msg="Executing migration" id="Add column uid in alert_notification" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.583950191Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.598658ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.58692726Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.587138063Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=210.133µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.590188503Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.591086875Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=898.072µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.595870289Z level=info msg="Executing migration" id="Remove unique index org_id_name" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.596809821Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=938.232µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.600124616Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.605995264Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.866798ms 11:12:43 grafana | 
logger=migrator t=2024-04-25T11:10:01.609128865Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.609225387Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=93.572µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.614098322Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.615588461Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.492139ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.618762213Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.620385725Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.623142ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.623489676Z level=info msg="Executing migration" id="Drop old annotation table v4" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.623605678Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=116.012µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.627965066Z level=info msg="Executing migration" id="create annotation table v5" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.629071311Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.106014ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.632278793Z level=info msg="Executing migration" id="add index annotation 0 v3" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.633921695Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.638922ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.637241879Z level=info msg="Executing migration" id="add index annotation 1 v3" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.638699698Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.458089ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.643259239Z level=info msg="Executing migration" id="add index annotation 2 v3" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.644710359Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.44729ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.648018473Z level=info msg="Executing migration" id="add index annotation 3 v3" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.649667104Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.647831ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.652716915Z level=info msg="Executing migration" id="add index annotation 4 v3" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.65378876Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.071375ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.65838863Z level=info msg="Executing migration" id="Update annotation table charset" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.6584184Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=30.19µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.660712871Z level=info msg="Executing migration" 
id="Add column region_id to annotation table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.664772356Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.058485ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.667765155Z level=info msg="Executing migration" id="Drop category_id index" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.668646447Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=883.682µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.673034726Z level=info msg="Executing migration" id="Add column tags to annotation table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.676982588Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.946852ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.679817116Z level=info msg="Executing migration" id="Create annotation_tag table v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.680518165Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=700.5µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.708106092Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.709650533Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.543571ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.714822682Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.716106609Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.283447ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.719688296Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:12:43 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:12:43 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:12:43 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:12:43 policy-pap | security.protocol = PLAINTEXT 11:12:43 policy-pap | security.providers = null 11:12:43 policy-pap | send.buffer.bytes = 131072 11:12:43 policy-pap | session.timeout.ms = 45000 11:12:43 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:12:43 policy-pap | socket.connection.setup.timeout.ms = 10000 11:12:43 policy-pap | ssl.cipher.suites = null 11:12:43 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:12:43 policy-pap | ssl.endpoint.identification.algorithm = https 11:12:43 policy-pap | ssl.engine.factory.class = null 11:12:43 policy-pap | ssl.key.password = null 11:12:43 policy-pap | ssl.keymanager.algorithm = SunX509 11:12:43 policy-pap | ssl.keystore.certificate.chain = null 11:12:43 policy-pap | ssl.keystore.key = null 11:12:43 policy-pap | ssl.keystore.location = null 11:12:43 policy-pap | ssl.keystore.password = null 11:12:43 policy-pap | ssl.keystore.type = JKS 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.732248103Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=12.563197ms 11:12:43 grafana | logger=migrator 
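
The policy-pap ssl.* block interleaved above is printed but inert: with security.protocol = PLAINTEXT (as in this CSIT run) none of the keystore or truststore values are consulted. A sketch of the minimal change that would make them take effect, with placeholder path and password:

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.common.config.SslConfigs;

    public class TlsClientPropsSketch {
        static Properties tlsProps() {
            Properties props = new Properties();
            // Switching away from PLAINTEXT is what activates the ssl.* keys.
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
            props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/etc/ssl/kafka-truststore.jks"); // placeholder
            props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");                      // placeholder
            return props;
        }
    }
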
t=2024-04-25T11:10:01.735288984Z level=info msg="Executing migration" id="Create annotation_tag table v3" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.735804241Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=513.527µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.740029377Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.740711386Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=680.389µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.744185912Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.744659528Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=476.036µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.748355718Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.749165698Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=809.88µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.754488209Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.754815774Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=324.725µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.758371961Z level=info msg="Executing migration" id="Add created time to annotation table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.764886457Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.499896ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.768110531Z level=info msg="Executing migration" id="Add updated time to annotation table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.77104048Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=2.93104ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.775469358Z level=info msg="Executing migration" id="Add index for created in annotation table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.776510513Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.040755ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.77936532Z level=info msg="Executing migration" id="Add index for updated in annotation table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.780509615Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.143345ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.838336325Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.838834541Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=502.526µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.842368158Z level=info msg="Executing migration" id="Add epoch_end column" 11:12:43 grafana | 
logger=migrator t=2024-04-25T11:10:01.848978825Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=6.612677ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.852306561Z level=info msg="Executing migration" id="Add index for epoch_end" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.853354034Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.046423ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.856474096Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.856711639Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=238.453µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.859508946Z level=info msg="Executing migration" id="Move region to single row" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.859968892Z level=info msg="Migration successfully executed" id="Move region to single row" duration=459.776µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.863865984Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.864808027Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=941.593µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.868160731Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.869068364Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=909.863µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.872098644Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.873603983Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.501719ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.876884367Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.878133374Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.245167ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.881087243Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.881942424Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=853.711µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.884828773Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.885701135Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=874.062µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.88835156Z level=info msg="Executing migration" id="Increase tags column to length 4096" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.888435042Z level=info msg="Migration successfully executed" id="Increase tags column to 
length 4096" duration=86.972µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.891589193Z level=info msg="Executing migration" id="create test_data table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.892460965Z level=info msg="Migration successfully executed" id="create test_data table" duration=869.221µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.898690847Z level=info msg="Executing migration" id="create dashboard_version table v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.900038556Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.347278ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.903338359Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.90496383Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.624711ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.908062023Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.909674514Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.61159ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.913154249Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.913383252Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=228.743µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.916470295Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.916938571Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=467.426µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.919875929Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.919979221Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=102.262µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.92294024Z level=info msg="Executing migration" id="create team table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.924094175Z level=info msg="Migration successfully executed" id="create team table" duration=1.153355ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.927063025Z level=info msg="Executing migration" id="add index team.org_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.928725027Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.658072ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.931899719Z level=info msg="Executing migration" id="add unique index team_org_id_name" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.932939783Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.036564ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.936719024Z level=info msg="Executing migration" id="Add column uid in team" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.942015674Z level=info msg="Migration 
successfully executed" id="Add column uid in team" duration=5.29688ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.945565501Z level=info msg="Executing migration" id="Update uid column values in team" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.945766774Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=200.913µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.94773058Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.948701083Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=969.933µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.951438269Z level=info msg="Executing migration" id="create team member table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.952191709Z level=info msg="Migration successfully executed" id="create team member table" duration=752.71µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.955218489Z level=info msg="Executing migration" id="add index team_member.org_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.956158502Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=943.413µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.959308014Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.960267836Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=961.542µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.963222896Z level=info msg="Executing migration" id="add index team_member.team_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.964158609Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=936.343µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.967200089Z level=info msg="Executing migration" id="Add column email to team table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.971763659Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.56297ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.975166674Z level=info msg="Executing migration" id="Add column external to team_member table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.979796036Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.624282ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.982801426Z level=info msg="Executing migration" id="Add column permission to team_member table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.987521069Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.716553ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.99058556Z level=info msg="Executing migration" id="create dashboard acl table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.991569063Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=983.343µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.994577123Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.995515426Z level=info msg="Migration successfully executed" id="add index 
dashboard_acl_dashboard_id" duration=934.833µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.998365744Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:01.999390457Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.024333ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.005426403Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.006412933Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=986.66µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.009963327Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.011654774Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.692747ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.014962327Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.016484382Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.522725ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.020049937Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.021022117Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=973.9µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.024016617Z level=info msg="Executing migration" id="add index dashboard_permission" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.025042056Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.024919ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.028159397Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.028737564Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=577.597µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.031617572Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 11:12:43 policy-pap | ssl.protocol = TLSv1.3 11:12:43 policy-pap | ssl.provider = null 11:12:43 policy-pap | ssl.secure.random.implementation = null 11:12:43 policy-pap | ssl.trustmanager.algorithm = PKIX 11:12:43 policy-pap | ssl.truststore.certificates = null 11:12:43 policy-pap | ssl.truststore.location = null 11:12:43 policy-pap | ssl.truststore.password = null 11:12:43 policy-pap | ssl.truststore.type = JKS 11:12:43 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:12:43 policy-pap | 11:12:43 policy-pap | [2024-04-25T11:10:44.424+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:12:43 policy-pap | [2024-04-25T11:10:44.424+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:12:43 policy-pap | [2024-04-25T11:10:44.424+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714043444424 11:12:43 policy-pap | [2024-04-25T11:10:44.425+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, 
groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Subscribed to topic(s): policy-pdp-pap 11:12:43 policy-pap | [2024-04-25T11:10:44.425+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 11:12:43 policy-pap | [2024-04-25T11:10:44.425+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=0858c40b-5ee4-4dc2-bdca-6175b27b5881, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@7ffe8a82 11:12:43 policy-pap | [2024-04-25T11:10:44.425+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=0858c40b-5ee4-4dc2-bdca-6175b27b5881, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 11:12:43 policy-pap | [2024-04-25T11:10:44.425+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:12:43 policy-pap | allow.auto.create.topics = true 11:12:43 policy-pap | auto.commit.interval.ms = 5000 11:12:43 policy-pap | auto.include.jmx.reporter = true 11:12:43 policy-pap | auto.offset.reset = latest 11:12:43 policy-pap | bootstrap.servers = [kafka:9092] 11:12:43 policy-pap | check.crcs = true 11:12:43 policy-pap | client.dns.lookup = use_all_dns_ips 11:12:43 policy-pap | client.id = consumer-policy-pap-4 11:12:43 policy-pap | client.rack = 11:12:43 policy-pap | connections.max.idle.ms = 540000 11:12:43 policy-pap | default.api.timeout.ms = 60000 11:12:43 policy-pap | enable.auto.commit = true 11:12:43 policy-pap | exclude.internal.topics = true 11:12:43 policy-pap | fetch.max.bytes = 52428800 11:12:43 policy-pap | fetch.max.wait.ms = 500 11:12:43 policy-pap | fetch.min.bytes = 1 11:12:43 policy-pap | group.id = policy-pap 11:12:43 policy-pap | group.instance.id = null 11:12:43 policy-pap | heartbeat.interval.ms = 3000 11:12:43 policy-pap | interceptor.classes = [] 11:12:43 policy-pap | internal.leave.group.on.close = true 11:12:43 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:12:43 policy-pap | isolation.level = read_uncommitted 11:12:43 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:12:43 policy-pap | max.partition.fetch.bytes = 1048576 11:12:43 policy-pap | max.poll.interval.ms = 300000 11:12:43 policy-pap | max.poll.records = 500 11:12:43 policy-pap | metadata.max.age.ms = 300000 11:12:43 policy-pap | metric.reporters = [] 11:12:43 policy-pap | metrics.num.samples = 2 11:12:43 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD 
CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0100-pdp.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 11:12:43 policy-db-migrator | JOIN pdpstatistics b 11:12:43 policy-db-migrator | ON 
a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 11:12:43 policy-db-migrator | SET a.id = b.id 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0210-sequence.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0220-sequence.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 11:12:43 policy-pap | metrics.recording.level = INFO 11:12:43 policy-pap | metrics.sample.window.ms = 30000 11:12:43 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:12:43 policy-pap | receive.buffer.bytes = 65536 11:12:43 policy-pap | reconnect.backoff.max.ms = 1000 11:12:43 policy-pap | reconnect.backoff.ms = 50 11:12:43 policy-pap | request.timeout.ms = 30000 11:12:43 policy-pap | retry.backoff.ms = 100 11:12:43 policy-pap | sasl.client.callback.handler.class = null 11:12:43 policy-pap | sasl.jaas.config = null 11:12:43 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:12:43 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:12:43 policy-pap | sasl.kerberos.service.name = null 11:12:43 
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:12:43 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:12:43 policy-pap | sasl.login.callback.handler.class = null 11:12:43 policy-pap | sasl.login.class = null 11:12:43 policy-pap | sasl.login.connect.timeout.ms = null 11:12:43 policy-pap | sasl.login.read.timeout.ms = null 11:12:43 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:12:43 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:12:43 policy-pap | sasl.login.refresh.window.factor = 0.8 11:12:43 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:12:43 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:12:43 policy-pap | sasl.login.retry.backoff.ms = 100 11:12:43 policy-pap | sasl.mechanism = GSSAPI 11:12:43 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:12:43 policy-pap | sasl.oauthbearer.expected.audience = null 11:12:43 policy-pap | sasl.oauthbearer.expected.issuer = null 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:12:43 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:12:43 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:12:43 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:12:43 policy-pap | security.protocol = PLAINTEXT 11:12:43 policy-pap | security.providers = null 11:12:43 policy-pap | send.buffer.bytes = 131072 11:12:43 policy-pap | session.timeout.ms = 45000 11:12:43 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:12:43 policy-pap | socket.connection.setup.timeout.ms = 10000 11:12:43 policy-pap | ssl.cipher.suites = null 11:12:43 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:12:43 policy-pap | ssl.endpoint.identification.algorithm = https 11:12:43 policy-pap | ssl.engine.factory.class = null 11:12:43 policy-pap | ssl.key.password = null 11:12:43 policy-pap | ssl.keymanager.algorithm = SunX509 11:12:43 policy-pap | ssl.keystore.certificate.chain = null 11:12:43 policy-pap | ssl.keystore.key = null 11:12:43 policy-pap | ssl.keystore.location = null 11:12:43 policy-pap | ssl.keystore.password = null 11:12:43 policy-pap | ssl.keystore.type = JKS 11:12:43 kafka | [2024-04-25 11:10:45,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 
11:10:45,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.031863194Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=245.922µs 11:12:43 grafana | 
logger=migrator t=2024-04-25T11:10:02.034634242Z level=info msg="Executing migration" id="create tag table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.035728812Z level=info msg="Migration successfully executed" id="create tag table" duration=1.09384ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.079397594Z level=info msg="Executing migration" id="add index tag.key_value" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.080987849Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.589575ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.084361982Z level=info msg="Executing migration" id="create login attempt table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.085260782Z level=info msg="Migration successfully executed" id="create login attempt table" duration=948.02µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.088785626Z level=info msg="Executing migration" id="add index login_attempt.username" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.089720345Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=934.729µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.092565603Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.093519243Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=949.55µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.09629322Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.111920344Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=15.618734ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.115036554Z level=info msg="Executing migration" id="create login_attempt v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.115632201Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=596.157µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.118489419Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.119384667Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=899.338µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.122072484Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.122362417Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=289.773µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.124512148Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.125104494Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=591.676µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.127553888Z level=info msg="Executing migration" id="create user auth table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.128246735Z level=info msg="Migration successfully executed" id="create user auth table" duration=692.457µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.131101143Z level=info msg="Executing migration" 
id="create index IDX_user_auth_auth_module_auth_id - v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.132004181Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=903.248µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.134801679Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.13489525Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=93.201µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.137613337Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.142406484Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=4.792337ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.145909268Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.150645915Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.736087ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.153342312Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.158117529Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.774367ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.161732805Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.166447011Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.713446ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.169600732Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.170516221Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=914.969µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.173217428Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.177982695Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=4.764837ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.180894443Z level=info msg="Executing migration" id="create server_lock table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.181693601Z level=info msg="Migration successfully executed" id="create server_lock table" duration=795.498µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.184693071Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.185473239Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=779.918µs 11:12:43 kafka | [2024-04-25 11:10:45,318] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,318] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,318] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,319] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,320] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 
from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,321] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0120-toscatrigger.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0140-toscaparameter.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0150-toscaproperty.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 11:12:43 kafka | [2024-04-25 11:10:45,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,326] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,327] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,327] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-49 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,328] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-18 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,329] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,331] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,334] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica 
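[editor's sketch] The TRACE lines above enumerate, field by field, the LeaderAndIsrPartitionState the controller ships to broker 1 for each of the 51 partitions. Kafka's real class is generated from its protocol schema, so the record below is only a simplified mirror of the fields visible in this log, annotated with the values this single-broker CSIT run produces.

// Simplified mirror of the fields visible in the LeaderAndIsr TRACE lines above;
// an illustration of the payload shape, not Kafka's generated protocol class.
import java.util.List;

record LeaderAndIsrPartitionStateSketch(
        String topicName,        // "__consumer_offsets" or "policy-pdp-pap"
        int partitionIndex,      // 0..49 for __consumer_offsets, 0 for policy-pdp-pap
        int controllerEpoch,     // 1 throughout this run
        int leader,              // broker 1 everywhere (single-broker setup)
        int leaderEpoch,         // 0: first election after topic creation
        List<Integer> isr,       // [1]
        int partitionEpoch,      // 0
        List<Integer> replicas,  // [1]
        boolean isNew) {}        // true: partitions just created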
(state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.18868106Z level=info msg="Executing migration" id="create user auth token table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.189555859Z level=info msg="Migration successfully executed" id="create user auth token table" duration=874.969µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.196671879Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.197653718Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=984.819µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.200944561Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.202635728Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.691407ms 11:12:43 grafana | logger=migrator 
t=2024-04-25T11:10:02.206096812Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.207413184Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.317052ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.210452844Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0100-upgrade.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | select 'upgrade to 1100 completed' as msg 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | msg 11:12:43 policy-db-migrator | upgrade to 1100 completed 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | 11:12:43 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.217733876Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=7.271212ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.222421313Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.22421833Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.797347ms 11:12:43 grafana | 
logger=migrator t=2024-04-25T11:10:02.23539773Z level=info msg="Executing migration" id="create cache_data table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.23640455Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.0061ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.23941973Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.24046397Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.04433ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.243280498Z level=info msg="Executing migration" id="create short_url table v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.244308278Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.02746ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.249868813Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.250976504Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.104461ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.253974043Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.254126045Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=151.322µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.256400997Z level=info msg="Executing migration" id="delete alert_definition table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.256609099Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=207.772µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.262118684Z level=info msg="Executing migration" id="recreate alert_definition table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.263077313Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=957.859µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.265656828Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.266712169Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.055231ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.269776779Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.270873019Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.09654ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.27593614Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.276124431Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=186.371µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.278929209Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 11:12:43 grafana | logger=migrator 
t=2024-04-25T11:10:02.279927129Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=998.02µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.283668426Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.285424723Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.756477ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.292100409Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.29326532Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.165361ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.296045857Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.297172658Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.126461ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.300158128Z level=info msg="Executing migration" id="Add column paused in alert_definition" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.306318829Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.159751ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.31141811Z level=info msg="Executing migration" id="drop alert_definition table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.313355638Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.936909ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.320867232Z level=info msg="Executing migration" id="delete alert_definition_version table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.321037264Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=170.762µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.323500348Z level=info msg="Executing migration" id="recreate alert_definition_version table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.324288636Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=785.278µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.329257325Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.330379605Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.12271ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.333177233Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.334246824Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.069731ms 11:12:43 policy-db-migrator | -------------- 11:12:43 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON 
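The grafana lines above all follow one fixed pattern: an "Executing migration" entry, the schema change itself, then a "Migration successfully executed" entry carrying the elapsed time. Grafana's actual migrator is Go code inside grafana-server; purely as a sketch of that logging pattern, with a hypothetical Migration interface and runner, in Java:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.LinkedHashMap;
import java.util.Map;

public class MigrationRunner {
    // Hypothetical stand-in for one schema migration step.
    interface Migration {
        void apply() throws Exception;
    }

    // Runs migrations in insertion order, logging each step and its duration,
    // mirroring the "Executing migration" / "Migration successfully executed"
    // pairs in the grafana output above.
    static void run(Map<String, Migration> migrations) throws Exception {
        for (Map.Entry<String, Migration> e : migrations.entrySet()) {
            System.out.printf("level=info msg=\"Executing migration\" id=\"%s\"%n", e.getKey());
            Instant start = Instant.now();
            e.getValue().apply();
            Duration d = Duration.between(start, Instant.now());
            System.out.printf("level=info msg=\"Migration successfully executed\" id=\"%s\" duration=%dµs%n",
                    e.getKey(), d.toNanos() / 1_000);
        }
    }

    public static void main(String[] args) throws Exception {
        Map<String, Migration> m = new LinkedHashMap<>();
        m.put("create user auth token table", () -> { /* DDL would run here */ });
        run(m);
    }
}
```

Timing each step individually is what makes the occasional slow migration (for example the multi-millisecond column renames further down in this log) stand out against the sub-millisecond index creations.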
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | 
11:12:43 policy-db-migrator | 
11:12:43 policy-db-migrator | > upgrade 0120-audit_sequence.sql
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | 
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | 
11:12:43 policy-db-migrator | 
11:12:43 policy-db-migrator | > upgrade 0130-statistics_sequence.sql
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | 
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | 
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | TRUNCATE TABLE sequence
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | 
11:12:43 policy-db-migrator | 
11:12:43 policy-db-migrator | > upgrade 0100-pdpstatistics.sql
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | 
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | DROP TABLE pdpstatistics
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | 
11:12:43 policy-db-migrator | 
11:12:43 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | 
11:12:43 policy-db-migrator | 
11:12:43 policy-db-migrator | > upgrade 0120-statistics_sequence.sql
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | DROP TABLE statistics_sequence
11:12:43 policy-db-migrator | --------------
11:12:43 policy-db-migrator | 
11:12:43 policy-db-migrator | policyadmin: OK: upgrade (1300)
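policy-db-migrator applies the numbered *.sql scripts in order, reports "policyadmin: OK: upgrade (1300)" once the run completes, and then prints a per-script history (ID, script, operation, from_version, to_version, tag, success, atTime), which starts just below. A minimal JDBC sketch of that apply-and-record loop; the JDBC URL, credentials, migration_history table name, and the naive split on ";" are illustrative assumptions, not taken from the log:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;
import java.sql.Timestamp;

public class UpgradeScriptRunner {
    // Applies one upgrade script and records the outcome, mirroring the
    // columns of the history table printed below. Splitting on ";" is a
    // simplification that breaks on statements containing literal semicolons.
    static void apply(Connection conn, int id, Path script, String tag) throws Exception {
        boolean success = true;
        try (Statement st = conn.createStatement()) {
            for (String sql : Files.readString(script).split(";")) {
                if (!sql.isBlank()) st.execute(sql);
            }
        } catch (Exception ex) {
            success = false;
            throw ex;
        } finally {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO migration_history(id, script, operation, tag, success, atTime) VALUES (?,?,?,?,?,?)")) {
                ps.setInt(1, id);
                ps.setString(2, script.getFileName().toString());
                ps.setString(3, "upgrade");
                ps.setString(4, tag);
                ps.setInt(5, success ? 1 : 0);
                ps.setTimestamp(6, new Timestamp(System.currentTimeMillis()));
                ps.executeUpdate();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical connection; the log does not show the DB endpoint.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_pass")) {
            apply(conn, 1, Path.of("0100-jpapdpgroup_properties.sql"), "2504241110090800u");
        }
    }
}
```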
11:12:43 policy-db-migrator | name version
11:12:43 policy-db-migrator | policyadmin 1300
11:12:43 policy-db-migrator | ID script operation from_version to_version tag success atTime
11:12:43 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:09
11:12:43 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:09
11:12:43 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:09
11:12:43 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.339134662Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.339290543Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=155.541µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.34205675Z level=info msg="Executing migration" id="drop alert_definition_version table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.343400084Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.342944ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.346229082Z level=info msg="Executing migration" id="create alert_instance table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.347254361Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.021819ms
11:12:43 policy-pap | ssl.protocol = TLSv1.3
11:12:43 policy-pap | ssl.provider = null
11:12:43 policy-pap | ssl.secure.random.implementation = null
11:12:43 policy-pap | ssl.trustmanager.algorithm = PKIX
11:12:43 policy-pap | ssl.truststore.certificates = null
11:12:43 policy-pap | ssl.truststore.location = null
11:12:43 policy-pap | ssl.truststore.password = null
11:12:43 policy-pap | ssl.truststore.type = JKS
11:12:43 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
11:12:43 policy-pap | 
11:12:43 policy-pap | [2024-04-25T11:10:44.431+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
11:12:43 policy-pap | [2024-04-25T11:10:44.431+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
11:12:43 policy-pap | [2024-04-25T11:10:44.431+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714043444431
11:12:43 policy-pap | [2024-04-25T11:10:44.431+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
11:12:43 policy-pap | [2024-04-25T11:10:44.432+00:00|INFO|ServiceManager|main] Policy PAP starting topics
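The policy-pap entries just above show a consumer built with StringDeserializer for keys and values, in group policy-pap, subscribing to the policy-pdp-pap topic on kafka:9092. A self-contained Java sketch of an equivalent consumer against the same broker address; this is an illustrative client, not PAP's actual wiring:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // from the config dump above
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap")); // same topic PAP subscribes to above
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("partition=%d offset=%d value=%s%n",
                        r.partition(), r.offset(), r.value());
            }
        }
    }
}
```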
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,337] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
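At this point the controller has moved every replica to OnlineReplica, and since broker 1 is the only broker, there is nobody else to notify (the UpdateMetadata request goes to HashSet() for 0 partitions). To verify the resulting leader/ISR assignment from outside the broker, an AdminClient query along these lines works; the topic name is taken from the log, everything else is illustrative:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class IsrInspector {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            // describeTopics reports, per partition, the same leader/isr data
            // the state-change TRACE lines above are recording broker-side.
            TopicDescription desc = admin.describeTopics(List.of("policy-pdp-pap"))
                    .topicNameValues().get("policy-pdp-pap").get();
            for (TopicPartitionInfo p : desc.partitions()) {
                System.out.printf("partition=%d leader=%d isr=%s%n",
                        p.partition(), p.leader().id(), p.isr());
            }
        }
    }
}
```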
11:12:43 kafka | [2024-04-25 11:10:45,344] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 policy-pap | [2024-04-25T11:10:44.432+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=0858c40b-5ee4-4dc2-bdca-6175b27b5881, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
11:12:43 policy-pap | [2024-04-25T11:10:44.432+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=6f727b00-63f5-4665-9483-d1a4468f597f, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
11:12:43 policy-pap | [2024-04-25T11:10:44.432+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=2b8fa7ed-4f2b-4ba2-bab5-8495ea45c979, alive=false, publisher=null]]: starting
11:12:43 policy-pap | [2024-04-25T11:10:44.449+00:00|INFO|ProducerConfig|main] ProducerConfig values:
11:12:43 policy-pap | 	acks = -1
11:12:43 policy-pap | 	auto.include.jmx.reporter = true
11:12:43 policy-pap | 	batch.size = 16384
11:12:43 policy-pap | 	bootstrap.servers = [kafka:9092]
11:12:43 policy-pap | 	buffer.memory = 33554432
11:12:43 policy-pap | 	client.dns.lookup = use_all_dns_ips
11:12:43 policy-pap | 	client.id = producer-1
11:12:43 policy-pap | 	compression.type = none
11:12:43 policy-pap | 	connections.max.idle.ms = 540000
11:12:43 policy-pap | 	delivery.timeout.ms = 120000
11:12:43 policy-pap | 	enable.idempotence = true
11:12:43 policy-pap | 	interceptor.classes = []
11:12:43 policy-pap | 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
11:12:43 policy-pap | 	linger.ms = 0
11:12:43 policy-pap | 	max.block.ms = 60000
11:12:43 policy-pap | 	max.in.flight.requests.per.connection = 5
11:12:43 policy-pap | 	max.request.size = 1048576
11:12:43 policy-pap | 	metadata.max.age.ms = 300000
11:12:43 policy-pap | 	metadata.max.idle.ms = 300000
11:12:43 policy-pap | 	metric.reporters = []
11:12:43 policy-pap | 	metrics.num.samples = 2
11:12:43 policy-pap | 	metrics.recording.level = INFO
11:12:43 policy-pap | 	metrics.sample.window.ms = 30000
11:12:43 policy-pap | 	partitioner.adaptive.partitioning.enable = true
11:12:43 policy-pap | 	partitioner.availability.timeout.ms = 0
11:12:43 policy-pap | 	partitioner.class = null
11:12:43 policy-pap | 	partitioner.ignore.keys = false
11:12:43 policy-pap | 	receive.buffer.bytes = 32768
11:12:43 policy-pap | 	reconnect.backoff.max.ms = 1000
11:12:43 policy-pap | 	reconnect.backoff.ms = 50
11:12:43 policy-pap | 	request.timeout.ms = 30000
11:12:43 policy-pap | 	retries = 2147483647
11:12:43 policy-pap | 	retry.backoff.ms = 100
11:12:43 policy-pap | 	sasl.client.callback.handler.class = null
11:12:43 policy-pap | 	sasl.jaas.config = null
11:12:43 policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
11:12:43 policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
11:12:43 policy-pap | 	sasl.kerberos.service.name = null
11:12:43 policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
11:12:43 policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
11:12:43 policy-pap | 	sasl.login.callback.handler.class = null
11:12:43 policy-pap | 	sasl.login.class = null
11:12:43 policy-pap | 	sasl.login.connect.timeout.ms = null
11:12:43 policy-pap | 	sasl.login.read.timeout.ms = null
11:12:43 policy-pap | 	sasl.login.refresh.buffer.seconds = 300
11:12:43 policy-pap | 	sasl.login.refresh.min.period.seconds = 60
11:12:43 policy-pap | 	sasl.login.refresh.window.factor = 0.8
11:12:43 policy-pap | 	sasl.login.refresh.window.jitter = 0.05
11:12:43 policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
11:12:43 policy-pap | 	sasl.login.retry.backoff.ms = 100
11:12:43 policy-pap | 	sasl.mechanism = GSSAPI
11:12:43 policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
11:12:43 policy-pap | 	sasl.oauthbearer.expected.audience = null
11:12:43 policy-pap | 	sasl.oauthbearer.expected.issuer = null
11:12:43 policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
11:12:43 policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
11:12:43 policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
11:12:43 policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
11:12:43 policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
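The ProducerConfig dump above (acks = -1, enable.idempotence = true, StringSerializer for key and value, bootstrap kafka:9092) corresponds to a producer constructed roughly like the sketch below; the record payload is a placeholder, not an actual PAP message:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpPapPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");               // logged as acks = -1
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);  // enable.idempotence = true
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Placeholder payload; PAP's real messages are JSON PDP events.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "placeholder-payload"));
            producer.flush();
        }
    }
}
```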
11:12:43 kafka | [2024-04-25 11:10:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,347] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,347] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,347] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,347] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,347] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,347] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,347] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,348] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,348] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,348] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,348] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,348] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,348] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,348] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:10
11:12:43 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
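Rows 15-39 above continue the migration history table started earlier. If the same view is needed directly from the database rather than from the console, a query along these lines would do; the table name and connection details are assumptions, since the log only shows the printed columns:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MigrationAudit {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection and table name; only the column set is
        // taken from the history rows printed in the log above.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_pass");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT id, script, success, atTime FROM migration_history ORDER BY id")) {
            while (rs.next()) {
                System.out.printf("%d %s success=%d at=%s%n",
                        rs.getInt("id"), rs.getString("script"),
                        rs.getInt("success"), rs.getString("atTime"));
            }
        }
    }
}
```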
msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.10574ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.355827436Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.356939418Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.109422ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.359620334Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.365693274Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.07184ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.370414771Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.37140589Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=990.6µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.374564021Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.375587891Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.02344ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.381817863Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.41100831Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=29.183937ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.45775031Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.478775827Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=21.024087ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.487789766Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.489324661Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.539585ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.494374251Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.495223579Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=848.868µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.500986126Z level=info msg="Executing migration" id="add current_reason column related to current_state" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.506963165Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.971679ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.511246717Z level=info msg="Executing migration" id="add result_fingerprint 
column to alert_instance" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.516121736Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.872729ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.518959484Z level=info msg="Executing migration" id="create alert_rule table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.519780982Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=820.968µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.522546629Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.52373385Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.183091ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.540225883Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.542553525Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=2.328282ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.54807217Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.549385852Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.313182ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.55215946Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.552344543Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=184.303µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.557716834Z level=info msg="Executing migration" id="add column for to alert_rule" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.565003277Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=7.315503ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.571721292Z level=info msg="Executing migration" id="add column annotations to alert_rule" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.57654949Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.828628ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.579943983Z level=info msg="Executing migration" id="add column labels to alert_rule" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.584135155Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.188152ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.59078477Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.59269137Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.912879ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.597451517Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.598698569Z level=info 
msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.245901ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.608565226Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.614729487Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.162251ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.622371242Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.629044338Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.672526ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.632829854Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.633714334Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=884.2µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.643180717Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.650376128Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.198851ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.67700506Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.683828406Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.823936ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.687277401Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.687358982Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=82.861µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.691000068Z level=info msg="Executing migration" id="create alert_rule_version table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.691877506Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=878.388µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.697819034Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.699506601Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.688197ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.702705313Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.704617922Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.912979ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.707733192Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 11:12:43 grafana | 
logger=migrator t=2024-04-25T11:10:02.707802673Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=70.371µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.719330057Z level=info msg="Executing migration" id="add column for to alert_rule_version" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.725520397Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.19166ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.729282975Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.735138803Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=5.859388ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.741333733Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.746848768Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=5.523305ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.752386202Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.758633074Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.246432ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.762669794Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.770300578Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=7.634554ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.800288984Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 11:12:43 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:12:43 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:12:43 policy-pap | security.protocol = PLAINTEXT 11:12:43 policy-pap | security.providers = null 11:12:43 policy-pap | send.buffer.bytes = 131072 11:12:43 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:12:43 policy-pap | socket.connection.setup.timeout.ms = 10000 11:12:43 policy-pap | ssl.cipher.suites = null 11:12:43 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:12:43 policy-pap | ssl.endpoint.identification.algorithm = https 11:12:43 policy-pap | ssl.engine.factory.class = null 11:12:43 policy-pap | ssl.key.password = null 11:12:43 policy-pap | ssl.keymanager.algorithm = SunX509 11:12:43 policy-pap | ssl.keystore.certificate.chain = null 11:12:43 policy-pap | ssl.keystore.key = null 11:12:43 policy-pap | ssl.keystore.location = null 11:12:43 policy-pap | ssl.keystore.password = null 11:12:43 policy-pap | ssl.keystore.type = JKS 11:12:43 policy-pap | ssl.protocol = TLSv1.3 11:12:43 policy-pap | ssl.provider = null 11:12:43 policy-pap | ssl.secure.random.implementation = null 11:12:43 policy-pap | ssl.trustmanager.algorithm = PKIX 11:12:43 policy-pap | ssl.truststore.certificates = null 11:12:43 policy-pap | ssl.truststore.location = null 11:12:43 policy-pap | ssl.truststore.password = null 11:12:43 policy-pap | ssl.truststore.type = JKS 11:12:43 policy-pap | transaction.timeout.ms = 60000 11:12:43 policy-pap | transactional.id = 
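With enable.idempotence = true and retries = 2147483647 (from the same config dump), the broker de-duplicates the producer's internal retries via sequence numbers, so acks=all resends cannot create duplicate records. A short sketch of a send with a completion callback under that configuration; topic and payload are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // implies acks=all
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The callback fires once the broker acks (or the send fails after
            // retries); retried batches are filtered server-side, so no duplicates.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "placeholder-payload"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.printf("acked partition=%d offset=%d%n",
                                    metadata.partition(), metadata.offset());
                        }
                    });
        }
    }
}
```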
11:12:43 policy-pap | 	transactional.id = null
11:12:43 policy-pap | 	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
11:12:43 policy-pap | 
11:12:43 policy-pap | [2024-04-25T11:10:44.461+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
11:12:43 policy-pap | [2024-04-25T11:10:44.478+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
11:12:43 policy-pap | [2024-04-25T11:10:44.478+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
11:12:43 policy-pap | [2024-04-25T11:10:44.478+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714043444478
11:12:43 policy-pap | [2024-04-25T11:10:44.478+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=2b8fa7ed-4f2b-4ba2-bab5-8495ea45c979, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
11:12:43 policy-pap | [2024-04-25T11:10:44.478+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=2efe004d-31bc-492c-ae39-4b69f6f16f5b, alive=false, publisher=null]]: starting
11:12:43 policy-pap | [2024-04-25T11:10:44.478+00:00|INFO|ProducerConfig|main] ProducerConfig values:
11:12:43 policy-pap | 	acks = -1
11:12:43 policy-pap | 	auto.include.jmx.reporter = true
11:12:43 policy-pap | 	batch.size = 16384
11:12:43 policy-pap | 	bootstrap.servers = [kafka:9092]
11:12:43 policy-pap | 	buffer.memory = 33554432
11:12:43 policy-pap | 	client.dns.lookup = use_all_dns_ips
11:12:43 policy-pap | 	client.id = producer-2
11:12:43 policy-pap | 	compression.type = none
11:12:43 policy-pap | 	connections.max.idle.ms = 540000
11:12:43 policy-pap | 	delivery.timeout.ms = 120000
11:12:43 policy-pap | 	enable.idempotence = true
11:12:43 policy-pap | 	interceptor.classes = []
11:12:43 policy-pap | 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
11:12:43 policy-pap | 	linger.ms = 0
11:12:43 policy-pap | 	max.block.ms = 60000
11:12:43 policy-pap | 	max.in.flight.requests.per.connection = 5
11:12:43 policy-pap | 	max.request.size = 1048576
11:12:43 policy-pap | 	metadata.max.age.ms = 300000
11:12:43 policy-pap | 	metadata.max.idle.ms = 300000
11:12:43 policy-pap | 	metric.reporters = []
11:12:43 policy-pap | 	metrics.num.samples = 2
11:12:43 policy-pap | 	metrics.recording.level = INFO
11:12:43 policy-pap | 	metrics.sample.window.ms = 30000
11:12:43 policy-pap | 	partitioner.adaptive.partitioning.enable = true
11:12:43 policy-pap | 	partitioner.availability.timeout.ms = 0
11:12:43 policy-pap | 	partitioner.class = null
11:12:43 policy-pap | 	partitioner.ignore.keys = false
11:12:43 policy-pap | 	receive.buffer.bytes = 32768
11:12:43 policy-pap | 	reconnect.backoff.max.ms = 1000
11:12:43 policy-pap | 	reconnect.backoff.ms = 50
11:12:43 kafka | [2024-04-25 11:10:45,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
11:12:43 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:11
11:12:43 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:12
11:12:43 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2504241110090800u 1 2024-04-25
11:10:13 11:12:43 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 kafka | [2024-04-25 11:10:45,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,352] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,353] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,393] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,393] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,393] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,393] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,395] TRACE 
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,396] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 
starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 11:12:43 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:13 11:12:43 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:14 11:12:43 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:14 11:12:43 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:14 11:12:43 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:14 11:12:43 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:14 11:12:43 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:14 11:12:43 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:14 11:12:43 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:14 11:12:43 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2504241110090800u 1 2024-04-25 11:10:14 11:12:43 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2504241110090900u 1 2024-04-25 11:10:14 11:12:43 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2504241110090900u 1 2024-04-25 11:10:14 11:12:43 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2504241110090900u 1 2024-04-25 11:10:14 11:12:43 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2504241110090900u 1 2024-04-25 11:10:14 11:12:43 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2504241110090900u 1 2024-04-25 11:10:14 11:12:43 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2504241110090900u 1 2024-04-25 11:10:14 11:12:43 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241110090900u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241110090900u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241110090900u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2504241110090900u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2504241110090900u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2504241110090900u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2504241110090900u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2504241110091000u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2504241110091000u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2504241110091000u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 113 
0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2504241110091000u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2504241110091000u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2504241110091000u 1 2024-04-25 11:10:15 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.800465716Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=180.702µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.803410905Z level=info msg="Executing migration" id=create_alert_configuration_table 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.804152062Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=741.267µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.813233022Z level=info msg="Executing migration" id="Add column default in alert_configuration" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.818002818Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.772326ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.821380392Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.821460433Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=80.531µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.825616234Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.833718153Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=8.100539ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.838975335Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.840180387Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.205222ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.843069305Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.850219006Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=7.148961ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.855463018Z level=info msg="Executing migration" id=create_ngalert_configuration_table 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.856089364Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=622.946µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.858830631Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.859601638Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=770.887µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.864527327Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 11:12:43 grafana | logger=migrator 
t=2024-04-25T11:10:02.871439045Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.908718ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.877883439Z level=info msg="Executing migration" id="create provenance_type table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.880507934Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=2.624785ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.897225739Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 11:12:43 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2504241110091000u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2504241110091000u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2504241110091000u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2504241110091100u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2504241110091200u 1 2024-04-25 11:10:15 11:12:43 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2504241110091200u 1 2024-04-25 11:10:16 11:12:43 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2504241110091200u 1 2024-04-25 11:10:16 11:12:43 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2504241110091200u 1 2024-04-25 11:10:16 11:12:43 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2504241110091300u 1 2024-04-25 11:10:16 11:12:43 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2504241110091300u 1 2024-04-25 11:10:16 11:12:43 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2504241110091300u 1 2024-04-25 11:10:16 11:12:43 policy-db-migrator | policyadmin: OK @ 1300 11:12:43 policy-pap | request.timeout.ms = 30000 11:12:43 policy-pap | retries = 2147483647 11:12:43 policy-pap | retry.backoff.ms = 100 11:12:43 policy-pap | sasl.client.callback.handler.class = null 11:12:43 policy-pap | sasl.jaas.config = null 11:12:43 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:12:43 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:12:43 policy-pap | sasl.kerberos.service.name = null 11:12:43 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:12:43 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:12:43 policy-pap | sasl.login.callback.handler.class = null 11:12:43 policy-pap | sasl.login.class = null 11:12:43 policy-pap | sasl.login.connect.timeout.ms = null 11:12:43 policy-pap | sasl.login.read.timeout.ms = null 11:12:43 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:12:43 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:12:43 policy-pap | sasl.login.refresh.window.factor = 0.8 11:12:43 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:12:43 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:12:43 policy-pap | sasl.login.retry.backoff.ms = 100 11:12:43 policy-pap | sasl.mechanism = GSSAPI 11:12:43 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:12:43 policy-pap | sasl.oauthbearer.expected.audience = null 11:12:43 policy-pap | sasl.oauthbearer.expected.issuer = null 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms 
= 10000 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:12:43 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:12:43 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:12:43 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:12:43 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:12:43 policy-pap | security.protocol = PLAINTEXT 11:12:43 policy-pap | security.providers = null 11:12:43 policy-pap | send.buffer.bytes = 131072 11:12:43 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:12:43 policy-pap | socket.connection.setup.timeout.ms = 10000 11:12:43 policy-pap | ssl.cipher.suites = null 11:12:43 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:12:43 policy-pap | ssl.endpoint.identification.algorithm = https 11:12:43 policy-pap | ssl.engine.factory.class = null 11:12:43 policy-pap | ssl.key.password = null 11:12:43 policy-pap | ssl.keymanager.algorithm = SunX509 11:12:43 policy-pap | ssl.keystore.certificate.chain = null 11:12:43 policy-pap | ssl.keystore.key = null 11:12:43 policy-pap | ssl.keystore.location = null 11:12:43 policy-pap | ssl.keystore.password = null 11:12:43 policy-pap | ssl.keystore.type = JKS 11:12:43 policy-pap | ssl.protocol = TLSv1.3 11:12:43 policy-pap | ssl.provider = null 11:12:43 policy-pap | ssl.secure.random.implementation = null 11:12:43 policy-pap | ssl.trustmanager.algorithm = PKIX 11:12:43 policy-pap | ssl.truststore.certificates = null 11:12:43 policy-pap | ssl.truststore.location = null 11:12:43 policy-pap | ssl.truststore.password = null 11:12:43 policy-pap | ssl.truststore.type = JKS 11:12:43 policy-pap | transaction.timeout.ms = 60000 11:12:43 policy-pap | transactional.id = null 11:12:43 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:12:43 policy-pap | 11:12:43 policy-pap | [2024-04-25T11:10:44.479+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
11:12:43 policy-pap | [2024-04-25T11:10:44.482+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:12:43 policy-pap | [2024-04-25T11:10:44.482+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:12:43 policy-pap | [2024-04-25T11:10:44.482+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714043444482 11:12:43 policy-pap | [2024-04-25T11:10:44.482+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=2efe004d-31bc-492c-ae39-4b69f6f16f5b, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 11:12:43 policy-pap | [2024-04-25T11:10:44.483+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 11:12:43 policy-pap | [2024-04-25T11:10:44.483+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 11:12:43 policy-pap | [2024-04-25T11:10:44.484+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 11:12:43 policy-pap | [2024-04-25T11:10:44.485+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 11:12:43 policy-pap | [2024-04-25T11:10:44.487+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 11:12:43 policy-pap | [2024-04-25T11:10:44.489+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 11:12:43 policy-pap | [2024-04-25T11:10:44.490+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 11:12:43 policy-pap | [2024-04-25T11:10:44.490+00:00|INFO|TimerManager|Thread-9] timer manager update started 11:12:43 policy-pap | [2024-04-25T11:10:44.491+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 11:12:43 policy-pap | [2024-04-25T11:10:44.491+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 11:12:43 policy-pap | [2024-04-25T11:10:44.492+00:00|INFO|ServiceManager|main] Policy PAP started 11:12:43 policy-pap | [2024-04-25T11:10:44.493+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.174 seconds (process running for 11.969) 11:12:43 policy-pap | [2024-04-25T11:10:44.993+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: hj8fcuYTRGyyshpZV-zZWg 11:12:43 policy-pap | [2024-04-25T11:10:44.994+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: hj8fcuYTRGyyshpZV-zZWg 11:12:43 policy-pap | [2024-04-25T11:10:44.994+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 11:12:43 policy-pap | [2024-04-25T11:10:44.996+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: hj8fcuYTRGyyshpZV-zZWg 11:12:43 kafka | [2024-04-25 11:10:45,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,397] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,398] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 
(state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,398] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,398] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,398] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,398] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,398] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,398] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,398] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,398] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,399] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,399] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,399] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,399] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,399] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,399] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,399] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,399] TRACE [Broker id=1] Handling 
LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,400] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,401] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,401] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,402] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 
11:12:43 kafka | [2024-04-25 11:10:45,403] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,466] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.899006616Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.786177ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.904198357Z level=info msg="Executing migration" id="create alert_image table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.905499121Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.306054ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.91150367Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.912981594Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.484224ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.915688831Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.915924863Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=238.802µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.919082414Z level=info msg="Executing migration" id=create_alert_configuration_history_table 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.920183735Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.101951ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.925958062Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:02.927570347Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.622175ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.028440417Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.030183695Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.034943563Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.036044594Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=1.100241ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.039919974Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.040953464Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.03299ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.045350508Z level=info msg="Executing migration" id="add last_applied column to 
alert_configuration_history" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.051663262Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.305564ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.056125406Z level=info msg="Executing migration" id="create library_element table v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.05840994Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=2.280944ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.117108033Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.119854161Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=2.747938ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.156848614Z level=info msg="Executing migration" id="create library_element_connection table v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.158285979Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.439735ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.163958057Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.165121058Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.162891ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.170482182Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.171741335Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.255363ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.174983858Z level=info msg="Executing migration" id="increase max description length to 2048" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.17515978Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=177.172µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.17817589Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.178316812Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=140.882µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.182548684Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.183043609Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=494.975µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.192467214Z level=info msg="Executing migration" id="create data_keys table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.195633246Z level=info msg="Migration successfully executed" id="create data_keys table" duration=3.173802ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.200523596Z level=info msg="Executing migration" id="create secrets table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.20202816Z level=info msg="Migration successfully executed" id="create secrets table" 
duration=1.504084ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.206079052Z level=info msg="Executing migration" id="rename data_keys name column to id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.237893313Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=31.811791ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.244032895Z level=info msg="Executing migration" id="add name column into data_keys" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.253279799Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=9.247594ms 11:12:43 kafka | [2024-04-25 11:10:45,480] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:12:43 kafka | [2024-04-25 11:10:45,483] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 11:12:43 kafka | [2024-04-25 11:10:45,484] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 11:12:43 kafka | [2024-04-25 11:10:45,486] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:45,500] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:12:43 kafka | [2024-04-25 11:10:45,501] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:12:43 kafka | [2024-04-25 11:10:45,502] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 11:12:43 kafka | [2024-04-25 11:10:45,502] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 11:12:43 kafka | [2024-04-25 11:10:45,502] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,514] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,515] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,515] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,515] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,515] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,522] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,523] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,523] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,523] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,523] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,538] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,540] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,540] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,540] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,540] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,549] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,549] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,549] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,550] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,550] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,559] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,560] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,560] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,560] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,560] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,568] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 policy-pap | [2024-04-25T11:10:45.098+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
11:12:43 policy-pap | [2024-04-25T11:10:45.104+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:45.105+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Cluster ID: hj8fcuYTRGyyshpZV-zZWg
11:12:43 policy-pap | [2024-04-25T11:10:45.143+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0
11:12:43 policy-pap | [2024-04-25T11:10:45.155+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0
11:12:43 policy-pap | [2024-04-25T11:10:45.208+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 5 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:45.267+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:45.319+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:45.396+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:45.425+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:45.504+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:45.537+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:45.613+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:45.647+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 13 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:45.725+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:45.757+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 15 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:45.843+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:45.865+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 17 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:45.948+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:45.972+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 19 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:46.059+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:46.085+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 21 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:46.167+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:46.194+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 23 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
11:12:43 policy-pap | [2024-04-25T11:10:46.287+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
11:12:43 policy-pap | [2024-04-25T11:10:46.295+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] (Re-)joining group
11:12:43 policy-pap | [2024-04-25T11:10:46.301+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
11:12:43 policy-pap | [2024-04-25T11:10:46.303+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
11:12:43 policy-pap | [2024-04-25T11:10:46.371+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Request joining group due to: need to re-join with the given member-id: consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3-800d0e89-fd79-4b76-8fc5-5d0e2b9be0e7
11:12:43 policy-pap | [2024-04-25T11:10:46.372+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
11:12:43 policy-pap | [2024-04-25T11:10:46.372+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] (Re-)joining group
11:12:43 policy-pap | [2024-04-25T11:10:46.372+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-e708bf9a-aba0-402e-860d-b682878f7611
11:12:43 policy-pap | [2024-04-25T11:10:46.372+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
11:12:43 policy-pap | [2024-04-25T11:10:46.372+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
11:12:43 policy-pap | [2024-04-25T11:10:49.401+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Successfully joined group with generation Generation{generationId=1, memberId='consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3-800d0e89-fd79-4b76-8fc5-5d0e2b9be0e7', protocol='range'}
11:12:43 policy-pap | [2024-04-25T11:10:49.402+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-e708bf9a-aba0-402e-860d-b682878f7611', protocol='range'}
11:12:43 policy-pap | [2024-04-25T11:10:49.411+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-e708bf9a-aba0-402e-860d-b682878f7611=Assignment(partitions=[policy-pdp-pap-0])}
11:12:43 policy-pap | [2024-04-25T11:10:49.411+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Finished assignment for group at generation 1: {consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3-800d0e89-fd79-4b76-8fc5-5d0e2b9be0e7=Assignment(partitions=[policy-pdp-pap-0])}
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.255701824Z level=info msg="Executing migration" id="copy data_keys id column values into name"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.255956886Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=254.302µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.25936815Z level=info msg="Executing migration" id="rename data_keys name column to label"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.296445304Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=37.076314ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.300121851Z level=info msg="Executing migration" id="rename data_keys id column back to name"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.333140465Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=33.003694ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.338009975Z level=info msg="Executing migration" id="create kv_store table v1"
11:12:43 policy-pap | [2024-04-25T11:10:49.447+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-e708bf9a-aba0-402e-860d-b682878f7611', protocol='range'}
11:12:43 policy-pap | [2024-04-25T11:10:49.447+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Successfully synced group in generation Generation{generationId=1, memberId='consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3-800d0e89-fd79-4b76-8fc5-5d0e2b9be0e7', protocol='range'}
11:12:43 policy-pap | [2024-04-25T11:10:49.447+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
11:12:43 policy-pap | [2024-04-25T11:10:49.448+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
11:12:43 policy-pap | [2024-04-25T11:10:49.453+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
11:12:43 policy-pap | [2024-04-25T11:10:49.453+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Adding newly assigned partitions: policy-pdp-pap-0
11:12:43 policy-pap | [2024-04-25T11:10:49.474+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Found no committed offset for partition policy-pdp-pap-0
11:12:43 policy-pap | [2024-04-25T11:10:49.475+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
11:12:43 policy-pap | [2024-04-25T11:10:49.494+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
11:12:43 policy-pap | [2024-04-25T11:10:49.494+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3, groupId=6f727b00-63f5-4665-9483-d1a4468f597f] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
11:12:43 policy-pap | [2024-04-25T11:10:52.706+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet'
11:12:43 policy-pap | [2024-04-25T11:10:52.707+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet'
11:12:43 policy-pap | [2024-04-25T11:10:52.710+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 3 ms
11:12:43 policy-pap | [2024-04-25T11:11:06.768+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
11:12:43 policy-pap | []
11:12:43 policy-pap | [2024-04-25T11:11:06.769+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
11:12:43 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9eb865b0-4084-4b3e-8f19-d2c12c23c3a1","timestampMs":1714043466730,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup"}
11:12:43 policy-pap | [2024-04-25T11:11:06.770+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
11:12:43 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"9eb865b0-4084-4b3e-8f19-d2c12c23c3a1","timestampMs":1714043466730,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup"}
11:12:43 policy-pap | [2024-04-25T11:11:06.779+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
11:12:43 policy-pap | [2024-04-25T11:11:06.854+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate starting
11:12:43 policy-pap | [2024-04-25T11:11:06.854+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate starting listener
11:12:43 policy-pap | [2024-04-25T11:11:06.854+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate starting timer
11:12:43 policy-pap | [2024-04-25T11:11:06.855+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=438732f5-c797-488c-bd92-c5c81e74dcb8, expireMs=1714043496855]
11:12:43 policy-pap | [2024-04-25T11:11:06.857+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate starting enqueue
11:12:43 policy-pap | [2024-04-25T11:11:06.857+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=438732f5-c797-488c-bd92-c5c81e74dcb8, expireMs=1714043496855]
11:12:43 policy-pap | [2024-04-25T11:11:06.857+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate started
11:12:43 policy-pap | [2024-04-25T11:11:06.859+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
11:12:43 policy-pap | {"source":"pap-daf76a7b-884e-46c4-ad7f-753bf9934851","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"438732f5-c797-488c-bd92-c5c81e74dcb8","timestampMs":1714043466836,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
11:12:43 policy-pap | [2024-04-25T11:11:06.911+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
11:12:43 policy-pap | {"source":"pap-daf76a7b-884e-46c4-ad7f-753bf9934851","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"438732f5-c797-488c-bd92-c5c81e74dcb8","timestampMs":1714043466836,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
11:12:43 policy-pap | [2024-04-25T11:11:06.912+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
11:12:43 policy-pap | {"source":"pap-daf76a7b-884e-46c4-ad7f-753bf9934851","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"438732f5-c797-488c-bd92-c5c81e74dcb8","timestampMs":1714043466836,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
11:12:43 policy-pap | [2024-04-25T11:11:06.912+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
11:12:43 policy-pap | [2024-04-25T11:11:06.912+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
11:12:43 policy-pap | [2024-04-25T11:11:06.939+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
11:12:43 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"438732f5-c797-488c-bd92-c5c81e74dcb8","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"4f17d921-6005-4f56-9b0c-a5eef0ff196b","timestampMs":1714043466924,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
11:12:43 policy-pap | [2024-04-25T11:11:06.941+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 438732f5-c797-488c-bd92-c5c81e74dcb8
11:12:43 policy-pap | [2024-04-25T11:11:06.941+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
11:12:43 kafka | [2024-04-25 11:10:45,569] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,569] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,569] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,569] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,576] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,577] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,577] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,577] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,577] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,583] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,584] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,584] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,584] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,584] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,592] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,593] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,593] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,593] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,594] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,602] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,603] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,603] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,603] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,604] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,614] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,615] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,615] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,615] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,616] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,627] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,628] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,628] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,628] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,629] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,639] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"3f516921-659d-4732-a3f6-09167a83f38a","timestampMs":1714043466923,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup"}
11:12:43 policy-pap | [2024-04-25T11:11:06.944+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
11:12:43 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"438732f5-c797-488c-bd92-c5c81e74dcb8","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"4f17d921-6005-4f56-9b0c-a5eef0ff196b","timestampMs":1714043466924,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
11:12:43 policy-pap | [2024-04-25T11:11:06.962+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate stopping
11:12:43 policy-pap | [2024-04-25T11:11:06.962+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate stopping enqueue
11:12:43 policy-pap | [2024-04-25T11:11:06.962+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate stopping timer
11:12:43 policy-pap | [2024-04-25T11:11:06.963+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=438732f5-c797-488c-bd92-c5c81e74dcb8, expireMs=1714043496855]
11:12:43 policy-pap | [2024-04-25T11:11:06.963+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate stopping listener
11:12:43 policy-pap | [2024-04-25T11:11:06.963+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate stopped
11:12:43 policy-pap | [2024-04-25T11:11:06.968+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate successful
11:12:43 policy-pap | [2024-04-25T11:11:06.969+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 start publishing next request
11:12:43 policy-pap | [2024-04-25T11:11:06.969+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpStateChange starting
11:12:43 policy-pap | [2024-04-25T11:11:06.969+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpStateChange starting listener
11:12:43 policy-pap | [2024-04-25T11:11:06.969+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpStateChange starting timer
11:12:43 policy-pap | [2024-04-25T11:11:06.969+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=d46037a2-b866-47bf-a7fa-0d41ffd427a3, expireMs=1714043496969]
11:12:43 policy-pap | [2024-04-25T11:11:06.969+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpStateChange starting enqueue
11:12:43 policy-pap | [2024-04-25T11:11:06.969+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpStateChange started
11:12:43 policy-pap | [2024-04-25T11:11:06.969+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=d46037a2-b866-47bf-a7fa-0d41ffd427a3, expireMs=1714043496969]
11:12:43 policy-pap | [2024-04-25T11:11:06.971+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
11:12:43 policy-pap | {"source":"pap-daf76a7b-884e-46c4-ad7f-753bf9934851","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d46037a2-b866-47bf-a7fa-0d41ffd427a3","timestampMs":1714043466837,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
11:12:43 policy-pap | [2024-04-25T11:11:06.991+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
11:12:43 policy-pap | {"source":"pap-daf76a7b-884e-46c4-ad7f-753bf9934851","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d46037a2-b866-47bf-a7fa-0d41ffd427a3","timestampMs":1714043466837,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
11:12:43 policy-pap | [2024-04-25T11:11:06.991+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
11:12:43 policy-pap | [2024-04-25T11:11:06.996+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
11:12:43 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d46037a2-b866-47bf-a7fa-0d41ffd427a3","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"8709b6a1-29be-41f8-833f-68f7f04254cd","timestampMs":1714043466985,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
11:12:43 policy-pap | [2024-04-25T11:11:06.996+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d46037a2-b866-47bf-a7fa-0d41ffd427a3
11:12:43 policy-pap | [2024-04-25T11:11:07.009+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
11:12:43 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"3f516921-659d-4732-a3f6-09167a83f38a","timestampMs":1714043466923,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup"}
11:12:43 policy-pap | [2024-04-25T11:11:07.010+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
11:12:43 policy-pap | [2024-04-25T11:11:07.017+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
11:12:43 policy-pap | {"source":"pap-daf76a7b-884e-46c4-ad7f-753bf9934851","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d46037a2-b866-47bf-a7fa-0d41ffd427a3","timestampMs":1714043466837,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
11:12:43 policy-pap | [2024-04-25T11:11:07.017+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
11:12:43 policy-pap | [2024-04-25T11:11:07.022+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
11:12:43 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d46037a2-b866-47bf-a7fa-0d41ffd427a3","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"8709b6a1-29be-41f8-833f-68f7f04254cd","timestampMs":1714043466985,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
11:12:43 policy-pap | [2024-04-25T11:11:07.023+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpStateChange stopping
11:12:43 policy-pap | [2024-04-25T11:11:07.023+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpStateChange stopping enqueue
11:12:43 policy-pap | [2024-04-25T11:11:07.023+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpStateChange stopping timer
11:12:43 policy-pap | [2024-04-25T11:11:07.023+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=d46037a2-b866-47bf-a7fa-0d41ffd427a3, expireMs=1714043496969]
11:12:43 policy-pap | [2024-04-25T11:11:07.024+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpStateChange stopping listener
11:12:43 policy-pap | [2024-04-25T11:11:07.024+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpStateChange stopped
11:12:43 policy-pap | [2024-04-25T11:11:07.024+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpStateChange successful
11:12:43 policy-pap | [2024-04-25T11:11:07.024+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 start publishing next request
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.339034785Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.02289ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.342063535Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.343338259Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.270383ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.351046717Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.35136638Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=319.113µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.359198539Z level=info msg="Executing migration" id="create permission table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.360171438Z level=info msg="Migration successfully executed" id="create permission table" duration=972.579µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.363448392Z level=info msg="Executing migration" id="add unique index permission.role_id"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.36435782Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=908.309µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.368439092Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.369385412Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=946.33µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.374447103Z level=info msg="Executing migration" id="create role table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.375395892Z level=info msg="Migration successfully executed" id="create role table" duration=947.569µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.384453294Z level=info msg="Executing migration" id="add column display_name"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.393640267Z level=info msg="Migration successfully executed" id="add column display_name" duration=9.186413ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.398368495Z level=info msg="Executing migration" id="add column group_name"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.404213684Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.840779ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.412224074Z level=info msg="Executing migration" id="add index role.org_id"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.413466717Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.242593ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.416409037Z level=info msg="Executing migration" id="add unique index role_org_id_name"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.417542158Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.132971ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.421247536Z level=info msg="Executing migration" id="add index role_org_id_uid"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.422111714Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=861.248µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.427021494Z level=info msg="Executing migration" id="create team role table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.427737641Z level=info msg="Migration successfully executed" id="create team role table" duration=715.517µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.430152866Z level=info msg="Executing migration" id="add index team_role.org_id"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.430991834Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=838.228µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.433622971Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.434519479Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=892.048µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.440710452Z level=info msg="Executing migration" id="add index team_role.team_id"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.442640911Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.938139ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.445763373Z level=info msg="Executing migration" id="create user role table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.446682583Z level=info msg="Migration successfully executed" id="create user role table" duration=920.59µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.450882065Z level=info msg="Executing migration" id="add index user_role.org_id"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.451914666Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.029201ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.454694084Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.455742064Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.0468ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.464018168Z level=info msg="Executing migration" id="add index user_role.user_id"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.46525659Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.233842ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.470463782Z level=info msg="Executing migration" id="create builtin role table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.471350282Z level=info msg="Migration successfully executed" id="create builtin role table" duration=887.62µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.514904082Z level=info msg="Executing migration" id="add index builtin_role.role_id"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.516463438Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.565186ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.523474198Z level=info msg="Executing migration" id="add index builtin_role.name"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.525186106Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.713948ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.531415589Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.53753233Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=6.121271ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.542357179Z level=info msg="Executing migration" id="add index builtin_role.org_id"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.543224408Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=867.529µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.550487671Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.551947365Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.458834ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.557922756Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.559440322Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.523616ms
11:12:43 policy-pap | [2024-04-25T11:11:07.024+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate starting
11:12:43 policy-pap | [2024-04-25T11:11:07.025+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate starting listener
11:12:43 policy-pap | [2024-04-25T11:11:07.025+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate starting timer
11:12:43 policy-pap | [2024-04-25T11:11:07.025+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=401da80a-2685-4572-ae21-04a3b9a931b6, expireMs=1714043497025]
11:12:43 policy-pap | [2024-04-25T11:11:07.025+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate starting enqueue
11:12:43 policy-pap | [2024-04-25T11:11:07.025+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate started
11:12:43 policy-pap | [2024-04-25T11:11:07.025+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
11:12:43 policy-pap | {"source":"pap-daf76a7b-884e-46c4-ad7f-753bf9934851","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"401da80a-2685-4572-ae21-04a3b9a931b6","timestampMs":1714043467004,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
11:12:43 policy-pap | [2024-04-25T11:11:07.035+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
11:12:43 policy-pap | {"source":"pap-daf76a7b-884e-46c4-ad7f-753bf9934851","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"401da80a-2685-4572-ae21-04a3b9a931b6","timestampMs":1714043467004,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
11:12:43 policy-pap | [2024-04-25T11:11:07.036+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
11:12:43 policy-pap | [2024-04-25T11:11:07.039+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
11:12:43 policy-pap | {"source":"pap-daf76a7b-884e-46c4-ad7f-753bf9934851","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"401da80a-2685-4572-ae21-04a3b9a931b6","timestampMs":1714043467004,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
11:12:43 policy-pap | [2024-04-25T11:11:07.039+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
11:12:43 policy-pap | [2024-04-25T11:11:07.048+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
11:12:43 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"401da80a-2685-4572-ae21-04a3b9a931b6","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"96008e0c-1a65-490b-9fb1-4e4b6c4cd46e","timestampMs":1714043467037,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
11:12:43 policy-pap | [2024-04-25T11:11:07.048+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
11:12:43 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"401da80a-2685-4572-ae21-04a3b9a931b6","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"96008e0c-1a65-490b-9fb1-4e4b6c4cd46e","timestampMs":1714043467037,"name":"apex-80113579-b8ad-4e5d-ac62-869520e19ac0","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
11:12:43 policy-pap | [2024-04-25T11:11:07.049+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate stopping
11:12:43 policy-pap | [2024-04-25T11:11:07.049+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate stopping enqueue
11:12:43 policy-pap | [2024-04-25T11:11:07.049+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate stopping timer
11:12:43 policy-pap | [2024-04-25T11:11:07.049+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=401da80a-2685-4572-ae21-04a3b9a931b6, expireMs=1714043497025]
11:12:43 policy-pap | [2024-04-25T11:11:07.049+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate stopping listener
11:12:43 policy-pap | [2024-04-25T11:11:07.049+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate stopped
11:12:43 policy-pap | [2024-04-25T11:11:07.049+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 401da80a-2685-4572-ae21-04a3b9a931b6
11:12:43 policy-pap | [2024-04-25T11:11:07.053+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 PdpUpdate successful
11:12:43 policy-pap | [2024-04-25T11:11:07.053+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-80113579-b8ad-4e5d-ac62-869520e19ac0 has no more requests
11:12:43 policy-pap | [2024-04-25T11:11:13.183+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client.
11:12:43 policy-pap | [2024-04-25T11:11:13.232+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
11:12:43 policy-pap | [2024-04-25T11:11:13.243+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
11:12:43 policy-pap | [2024-04-25T11:11:13.245+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
11:12:43 policy-pap | [2024-04-25T11:11:13.679+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup
11:12:43 policy-pap | [2024-04-25T11:11:14.306+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup
11:12:43 policy-pap | [2024-04-25T11:11:14.307+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup
11:12:43 policy-pap | [2024-04-25T11:11:14.863+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup
11:12:43 policy-pap | [2024-04-25T11:11:15.116+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0
11:12:43 policy-pap | [2024-04-25T11:11:15.242+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
11:12:43 policy-pap | [2024-04-25T11:11:15.242+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup
11:12:43 policy-pap | [2024-04-25T11:11:15.243+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup
11:12:43 policy-pap | [2024-04-25T11:11:15.260+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-25T11:11:15Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-25T11:11:15Z, user=policyadmin)]
11:12:43 policy-pap | [2024-04-25T11:11:16.013+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group testGroup
11:12:43 policy-pap | [2024-04-25T11:11:16.015+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-3] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
11:12:43 policy-pap | [2024-04-25T11:11:16.015+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering an undeploy for policy onap.restart.tca 1.0.0
11:12:43 policy-pap | [2024-04-25T11:11:16.015+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group testGroup
11:12:43 policy-pap | [2024-04-25T11:11:16.016+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group testGroup
11:12:43 policy-pap | [2024-04-25T11:11:16.029+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-25T11:11:16Z, user=policyadmin)]
11:12:43 kafka | [2024-04-25 11:10:45,640] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,640] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,640] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,641] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,653] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,654] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,654] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,654] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,655] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,662] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,663] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,663] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,663] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,663] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,673] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,674] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,674] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,674] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,674] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,684] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,685] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,685] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,685] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,685] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,695] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,696] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,696] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,696] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,696] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,709] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,710] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,710] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,710] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,710] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.565676454Z level=info msg="Executing migration" id="add unique index role.uid"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.566834357Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.157993ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.571454793Z level=info msg="Executing migration" id="create seed assignment table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.57208909Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=633.707µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.57711067Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.578513185Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.396804ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.58407458Z level=info msg="Executing migration" id="add column hidden to role table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.592422655Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.350595ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.596221513Z level=info msg="Executing migration" id="permission kind migration"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.604446706Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.224643ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.657922196Z level=info msg="Executing migration" id="permission attribute migration"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.666870086Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.96803ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.670496144Z level=info msg="Executing migration" id="permission identifier migration"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.678538365Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.037831ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.683944859Z level=info msg="Executing migration" id="add permission identifier index"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.685480195Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.535316ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.692040861Z level=info msg="Executing migration" id="add permission action scope role_id index"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.693393685Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.351954ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.716007283Z level=info msg="Executing migration" id="remove permission role_id action scope index"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.717929423Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.92157ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.722794872Z level=info msg="Executing migration" id="create query_history table v1"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.724039575Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.243623ms
11:12:43 grafana | logger=migrator
t=2024-04-25T11:10:03.728674291Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.735574601Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=6.89965ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.739876714Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.739975035Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=101.721µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.747035277Z level=info msg="Executing migration" id="rbac disabled migrator" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.747128218Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=99.581µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.753540892Z level=info msg="Executing migration" id="teams permissions migration" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.754108559Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=568.057µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.758569263Z level=info msg="Executing migration" id="dashboard permissions" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.75925228Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=684.687µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.765584494Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.766244231Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=656.207µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.770826467Z level=info msg="Executing migration" id="drop managed folder create actions" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.7710446Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=218.833µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.777174782Z level=info msg="Executing migration" id="alerting notification permissions" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.777733047Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=554.245µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.781509475Z level=info msg="Executing migration" id="create query_history_star table v1" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.782492735Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=982.12µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.787513535Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.788611157Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.096502ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.792660678Z level=info msg="Executing migration" id="add column org_id in query_history_star" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.801522637Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.856609ms 
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.805322666Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.805406517Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=85.241µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.810886532Z level=info msg="Executing migration" id="create correlation table v1"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.812021113Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.136621ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.818415527Z level=info msg="Executing migration" id="add index correlations.uid"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.819474239Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.058672ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.822232417Z level=info msg="Executing migration" id="add index correlations.source_uid"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.823392698Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.158921ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.832920305Z level=info msg="Executing migration" id="add correlation config column"
11:12:43 kafka | [2024-04-25 11:10:45,770] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,772] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,772] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,772] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,772] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,778] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,781] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,781] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,781] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,781] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,790] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,791] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,791] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,791] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,791] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,797] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,798] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,798] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,798] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,798] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,809] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,811] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,811] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,811] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,811] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,820] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,821] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,821] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,821] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,822] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,842] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,843] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,844] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,844] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,844] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.845610622Z level=info msg="Migration successfully executed" id="add correlation config column" duration=12.681077ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.850796195Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.85225775Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.461035ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.857318251Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.858815926Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.502205ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.863968858Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.885803389Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=21.835261ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.89093771Z level=info msg="Executing migration" id="create correlation v2"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.893176983Z level=info msg="Migration successfully executed" id="create correlation v2" duration=2.240863ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.899422156Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.900323556Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=902.4µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.904905771Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.906258366Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.348314ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.912884822Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.914725971Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.840659ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.920418679Z level=info msg="Executing migration" id="copy correlation v1 to v2"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.920714672Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=301.013µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.923420399Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.924419828Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=998.619µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.95523924Z level=info msg="Executing migration" id="add provisioning column"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.964666676Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.429876ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.970420234Z level=info msg="Executing migration" id="create entity_events table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.971488144Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.02945ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.975759778Z level=info msg="Executing migration" id="create dashboard public config v1"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.977278103Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.517715ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.983460655Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.984159863Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.988759088Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.989455466Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.995352846Z level=info msg="Executing migration" id="Drop old dashboard public config table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:03.997086043Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.738967ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.003442808Z level=info msg="Executing migration" id="recreate dashboard public config v1"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.004683041Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.240053ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.011025857Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.012224744Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.202017ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.01651687Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.017705377Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.187577ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.023034488Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.031730454Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=8.696696ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.036107652Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.037271628Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.167426ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.043136656Z level=info msg="Executing migration" id="Drop public config table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.043930107Z level=info msg="Migration successfully executed" id="Drop public config table" duration=792.861µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.051125443Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.052336049Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.210236ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.05684903Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.057690231Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=841.521µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.062913731Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.064803246Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.888395ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.069836644Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
11:12:43 kafka | [2024-04-25 11:10:45,851] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,852] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,853] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,853] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,853] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,868] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,869] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,869] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,869] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,869] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,877] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,877] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,877] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,877] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,877] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,911] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,912] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,912] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,912] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,912] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,920] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,921] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,921] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,921] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,921] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,930] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,931] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,931] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,931] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,931] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,937] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,938] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,938] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,938] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,938] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(g2fO5llxRxC6H0s6V7OP0w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,947] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,948] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,948] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,948] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,948] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,954] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,955] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,955] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,955] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,955] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,969] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,969] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,970] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,970] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,970] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,978] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,978] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,978] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,978] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,978] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,986] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,986] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,986] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,986] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,986] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,991] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,992] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,992] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,992] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,992] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:45,998] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:45,999] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:45,999] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.071593787Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.757183ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.078237886Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.105887978Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=27.636301ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.110786644Z level=info msg="Executing migration" id="add annotations_enabled column"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.118449617Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.662533ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.121177893Z level=info msg="Executing migration" id="add time_selection_enabled column"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.129704628Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.525894ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.133231675Z level=info msg="Executing migration" id="delete orphaned public dashboards"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.133442558Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=208.372µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.139441788Z level=info msg="Executing migration" id="add share column"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.150974072Z level=info msg="Migration successfully executed" id="add share column" duration=11.530604ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.18053386Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.180784433Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=250.313µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.184575834Z level=info msg="Executing migration" id="create file table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.186053824Z level=info msg="Migration successfully executed" id="create file table" duration=1.47653ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.191435126Z level=info msg="Executing migration" id="file table idx: path natural pk"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.193157449Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.721633ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.199283671Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.200332755Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.049084ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.204027805Z level=info msg="Executing migration" id="create file_meta table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.205304322Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.273297ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.209562339Z level=info msg="Executing migration" id="file table idx: path key"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.211337193Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.768244ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.21788029Z level=info msg="Executing migration" id="set path collation in file table"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.217959211Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=79.611µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.224607061Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.224729953Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=123.902µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.227803564Z level=info msg="Executing migration" id="managed permissions migration"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.228709686Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=906.012µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.234005597Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.23423159Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=226.083µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.238932764Z level=info msg="Executing migration" id="RBAC action name migrator"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.240252891Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.319877ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.245116546Z level=info msg="Executing migration" id="Add UID column to playlist"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.254138657Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.021671ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.258896891Z level=info msg="Executing migration" id="Update uid column values in playlist"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.259111714Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=214.793µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.264722599Z level=info msg="Executing migration" id="Add index for uid in playlist"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.266093857Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.371758ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.26993251Z level=info msg="Executing migration" id="update group index for alert rules"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.270437916Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=506.416µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.272681907Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.27297454Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=291.843µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.276964473Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.277521261Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=559.178µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.280938837Z level=info msg="Executing migration" id="add action column to seed_assignment"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.292030846Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.094209ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.295578374Z level=info msg="Executing migration" id="add scope column to seed_assignment"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.30201374Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.435796ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.30647162Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.307661955Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.190515ms
11:12:43 kafka | [2024-04-25 11:10:45,999] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:45,999] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,004] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:46,005] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:46,005] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:46,005] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:46,005] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,011] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:46,011] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:46,012] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:46,012] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:46,012] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,018] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:46,018] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:46,018] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:46,018] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:46,018] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,027] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:46,028] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:46,028] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:46,028] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:46,028] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,037] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:46,038] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:46,038] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:46,038] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:46,038] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,050] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:46,051] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:46,051] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:46,051] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:46,051] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,061] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
11:12:43 kafka | [2024-04-25 11:10:46,062] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
11:12:43 kafka | [2024-04-25 11:10:46,062] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
11:12:43 kafka | [2024-04-25 11:10:46,062] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
11:12:43 policy-pap | [2024-04-25T11:11:16.358+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup
11:12:43 policy-pap | [2024-04-25T11:11:16.358+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
11:12:43 policy-pap | [2024-04-25T11:11:16.359+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
11:12:43 policy-pap | [2024-04-25T11:11:16.359+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
11:12:43 policy-pap | [2024-04-25T11:11:16.359+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
11:12:43 policy-pap | [2024-04-25T11:11:16.360+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
11:12:43 policy-pap | [2024-04-25T11:11:16.369+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-25T11:11:16Z, user=policyadmin)]
11:12:43 policy-pap | [2024-04-25T11:11:36.855+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=438732f5-c797-488c-bd92-c5c81e74dcb8, expireMs=1714043496855]
11:12:43 policy-pap | [2024-04-25T11:11:36.970+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=d46037a2-b866-47bf-a7fa-0d41ffd427a3, expireMs=1714043496969]
11:12:43 policy-pap | [2024-04-25T11:11:36.973+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup
11:12:43 policy-pap | [2024-04-25T11:11:36.974+00:00|INFO|SessionData|http-nio-6969-exec-10] deleting DB group testGroup
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.310965879Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.402374997Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=91.399567ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.475887533Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.478299976Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.417873ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.489265023Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.490410768Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.149605ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.510363885Z level=info msg="Executing migration" id="add primary key to seed_assigment"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.536382085Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=26.01638ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.543637482Z level=info msg="Executing migration" id="add origin column to seed_assignment"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.556481595Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=12.840763ms
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.56358133Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.564368401Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=793.812µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.567603874Z level=info msg="Executing migration" id="prevent seeding OnCall access"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.567907198Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=307.324µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.571522146Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.57183605Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=313.904µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.575980947Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.576285251Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=303.804µs
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.584243958Z level=info msg="Executing migration" id="migrate external alertmanagers to
datsourcse" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.584893256Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=650.008µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.589158674Z level=info msg="Executing migration" id="create folder table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.590270828Z level=info msg="Migration successfully executed" id="create folder table" duration=1.112444ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.594655938Z level=info msg="Executing migration" id="Add index for parent_uid" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.597080359Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.435382ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.600546996Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.601993305Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.446099ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.60529365Z level=info msg="Executing migration" id="Update folder title length" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.60532903Z level=info msg="Migration successfully executed" id="Update folder title length" duration=35.99µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.610807024Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.612290844Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.48413ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.616770734Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.618023521Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.253307ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.620684557Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.622030654Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.345918ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.627261044Z level=info msg="Executing migration" id="Sync dashboard and folder table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.627963674Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=701.539µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.633041912Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.633935975Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=893.993µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.64108735Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.642484259Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.398459ms 11:12:43 grafana | logger=migrator 
t=2024-04-25T11:10:04.646009076Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.648204166Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=2.19489ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.651434559Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.652680716Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.246227ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.660361829Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.661738298Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.376609ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.665303726Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.667241211Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.937195ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.671052003Z level=info msg="Executing migration" id="create anon_device table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.672149057Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.096754ms 11:12:43 kafka | [2024-04-25 11:10:46,062] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,132] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:12:43 kafka | [2024-04-25 11:10:46,133] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:12:43 kafka | [2024-04-25 11:10:46,133] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 11:12:43 kafka | [2024-04-25 11:10:46,133] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 11:12:43 kafka | [2024-04-25 11:10:46,133] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,143] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:12:43 kafka | [2024-04-25 11:10:46,144] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:12:43 kafka | [2024-04-25 11:10:46,144] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 11:12:43 kafka | [2024-04-25 11:10:46,144] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 11:12:43 kafka | [2024-04-25 11:10:46,144] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(VI4acjwHSb2Uel08QoceSw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] 
Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.679570637Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 11:12:43 grafana | logger=migrator 
t=2024-04-25T11:10:04.681676185Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.102818ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.688197943Z level=info msg="Executing migration" id="add index anon_device.updated_at" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.691569518Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=3.361566ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.698688144Z level=info msg="Executing migration" id="create signing_key table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.699834919Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.139294ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.704601303Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.706525088Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.924185ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.710025036Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.712211305Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=2.185559ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.719888848Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.720295253Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=407.315µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.724176656Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.742143246Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=17.93937ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.747163404Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.748151407Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=988.593µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.754649355Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.757980189Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=3.319404ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.76329006Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.764694309Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.399849ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.767677169Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.769012627Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.335068ms 
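A note for reading the grafana migrator entries above: each one reports its timing in mixed units (ms, µs, and s), which makes them awkward to total by eye. Below is a minimal Python sketch, not part of this CI job, for tallying them from a saved copy of this console output; the file name migrations.log and the exact logfmt field layout are assumptions based only on the lines shown here.

import re

# Durations in these migrator lines carry s, ms, or µs suffixes.
UNIT_TO_MS = {"s": 1000.0, "ms": 1.0, "µs": 0.001, "us": 0.001}
# Match: msg="Migration successfully executed" ... duration=<value><unit>
PATTERN = re.compile(r'msg="Migration successfully executed".*?duration=([0-9.]+)(µs|us|ms|s)')

total_ms = 0.0
count = 0
with open("migrations.log", encoding="utf-8") as fh:
    for line in fh:
        match = PATTERN.search(line)
        if match:
            total_ms += float(match.group(1)) * UNIT_TO_MS[match.group(2)]
            count += 1

print(f"{count} migrations, {total_ms / 1000.0:.3f}s total")

Run against a full copy of the migrator output, this should roughly agree with the migrator's own closing summary a few lines below (performed=548 ... duration=4.50926614s).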
11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.775631966Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.777085186Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.45267ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.782327936Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.783889757Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.562021ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.78932295Z level=info msg="Executing migration" id="create sso_setting table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.790944032Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.619762ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.798600414Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.799535037Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=937.103µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.805286424Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.806319788Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=1.023774ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.814864462Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.815015474Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=156.062µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.84080831Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.848422053Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=7.618843ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.864306866Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.875689399Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=11.362043ms 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.881392925Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.88178381Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=398.175µs 11:12:43 grafana | logger=migrator t=2024-04-25T11:10:04.885215666Z level=info msg="migrations completed" performed=548 skipped=0 duration=4.50926614s 11:12:43 grafana | logger=sqlstore t=2024-04-25T11:10:04.897016035Z level=info msg="Created default admin" user=admin 11:12:43 grafana | logger=sqlstore t=2024-04-25T11:10:04.89733129Z level=info msg="Created default organization" 11:12:43 grafana | logger=secrets 
t=2024-04-25T11:10:04.90334024Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 11:12:43 grafana | logger=plugin.store t=2024-04-25T11:10:04.927746427Z level=info msg="Loading plugins..." 11:12:43 grafana | logger=local.finder t=2024-04-25T11:10:04.986685789Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 11:12:43 grafana | logger=plugin.store t=2024-04-25T11:10:04.986727639Z level=info msg="Plugins loaded" count=55 duration=58.981752ms 11:12:43 grafana | logger=query_data t=2024-04-25T11:10:04.989846511Z level=info msg="Query Service initialization" 11:12:43 grafana | logger=live.push_http t=2024-04-25T11:10:05.012064218Z level=info msg="Live Push Gateway initialization" 11:12:43 grafana | logger=ngalert.migration t=2024-04-25T11:10:05.022127403Z level=info msg=Starting 11:12:43 grafana | logger=ngalert.migration t=2024-04-25T11:10:05.022844001Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 11:12:43 grafana | logger=ngalert.migration orgID=1 t=2024-04-25T11:10:05.023228107Z level=info msg="Migrating alerts for organisation" 11:12:43 grafana | logger=ngalert.migration orgID=1 t=2024-04-25T11:10:05.023848384Z level=info msg="Alerts found to migrate" alerts=0 11:12:43 grafana | logger=ngalert.migration t=2024-04-25T11:10:05.025517275Z level=info msg="Completed alerting migration" 11:12:43 grafana | logger=ngalert.state.manager t=2024-04-25T11:10:05.059891711Z level=info msg="Running in alternative execution of Error/NoData mode" 11:12:43 grafana | logger=infra.usagestats.collector t=2024-04-25T11:10:05.061796314Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 11:12:43 grafana | logger=provisioning.datasources t=2024-04-25T11:10:05.064629329Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 11:12:43 grafana | logger=provisioning.alerting t=2024-04-25T11:10:05.081934044Z level=info msg="starting to provision alerting" 11:12:43 grafana | logger=provisioning.alerting t=2024-04-25T11:10:05.081974675Z level=info msg="finished to provision alerting" 11:12:43 grafana | logger=ngalert.state.manager t=2024-04-25T11:10:05.082223828Z level=info msg="Warming state cache for startup" 11:12:43 grafana | logger=ngalert.multiorg.alertmanager t=2024-04-25T11:10:05.082567932Z level=info msg="Starting MultiOrg Alertmanager" 11:12:43 grafana | logger=ngalert.state.manager t=2024-04-25T11:10:05.082993097Z level=info msg="State cache has been initialized" states=0 duration=765.119µs 11:12:43 grafana | logger=ngalert.scheduler t=2024-04-25T11:10:05.083047237Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 11:12:43 grafana | logger=ticker t=2024-04-25T11:10:05.083110538Z level=info msg=starting first_tick=2024-04-25T11:10:10Z 11:12:43 grafana | logger=grafanaStorageLogger t=2024-04-25T11:10:05.087899707Z level=info msg="Storage starting" 11:12:43 grafana | logger=http.server t=2024-04-25T11:10:05.089414125Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 11:12:43 grafana | logger=provisioning.dashboard t=2024-04-25T11:10:05.154379323Z level=info msg="starting to provision dashboards" 11:12:43 grafana | logger=sqlstore.transactions t=2024-04-25T11:10:05.266733033Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 11:12:43 grafana | 
logger=grafana-apiserver t=2024-04-25T11:10:05.533107166Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 11:12:43 grafana | logger=grafana-apiserver t=2024-04-25T11:10:05.533966636Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 11:12:43 grafana | logger=plugins.update.checker t=2024-04-25T11:10:05.564834535Z level=info msg="Update check succeeded" duration=477.210011ms 11:12:43 grafana | logger=provisioning.dashboard t=2024-04-25T11:10:05.564992757Z level=info msg="finished to provision dashboards" 11:12:43 grafana | logger=grafana.update.checker t=2024-04-25T11:10:05.565542544Z level=info msg="Update check succeeded" duration=477.794848ms 11:12:43 grafana | logger=infra.usagestats t=2024-04-25T11:10:45.097824651Z level=info msg="Usage stats are ready to report" 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 
(state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,149] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 11:12:43 kafka | [2024-04-25 11:10:46,158] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,159] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,160] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,161] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,168] INFO [Broker id=1] Finished LeaderAndIsr request in 820ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,174] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=VI4acjwHSb2Uel08QoceSw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=g2fO5llxRxC6H0s6V7OP0w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,175] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 15 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,177] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,178] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,179] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,179] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,179] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,179] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,179] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,180] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,180] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,180] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,180] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,180] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,180] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,181] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 21 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,181] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,181] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,181] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,181] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,181] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,181] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,182] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 21 milliseconds for epoch 0, of which 21 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,182] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,182] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,182] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,182] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,182] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,183] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,183] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,183] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,183] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,183] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 22 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,184] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 23 milliseconds for epoch 0, of which 22 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,184] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,184] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,184] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,184] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 23 milliseconds for epoch 0, of which 23 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 24 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 25 milliseconds for epoch 0, of which 24 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
11:12:43 kafka | [2024-04-25 11:10:46,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 25 milliseconds for epoch 0, of which 25 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
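NOTE: the per-partition election and loading entries above cover all 50 partitions of the __consumer_offsets topic; each consumer group is pinned to exactly one of those partitions via the Java hash of its group id (Kafka's GroupMetadataManager uses abs(groupId.hashCode) % offsets.topic.num.partitions). A minimal Python sketch of that mapping; the "Preparing to rebalance" entries further down show it placing group policy-pap on __consumer_offsets-24:

    def java_string_hashcode(s: str) -> int:
        # Emulate Java's 32-bit signed String.hashCode().
        h = 0
        for ch in s:
            h = (31 * h + ord(ch)) & 0xFFFFFFFF
        return h - 0x100000000 if h >= 0x80000000 else h

    def offsets_partition_for(group_id: str, num_partitions: int = 50) -> int:
        # Kafka masks to a non-negative int, then takes the modulus.
        return (java_string_hashcode(group_id) & 0x7FFFFFFF) % num_partitions

    print(offsets_partition_for("policy-pap"))  # -> 24, matching __consumer_offsets-24 below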
11:12:43 kafka | [2024-04-25 11:10:46,188] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,189] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,189] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,189] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,190] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,190] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,190] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,190] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,190] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,190] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,190] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,190] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,190] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,192] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,192] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,192] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,192] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,192] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,192] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,192] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,192] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,192] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,192] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,193] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,194] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,194] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,194] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,194] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,194] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,194] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,194] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,194] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,194] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,194] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,196] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
11:12:43 kafka | [2024-04-25 11:10:46,196] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
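NOTE: the "Add 51 partitions" summary above can be cross-checked against the individual TRACE entries: 50 __consumer_offsets partitions plus policy-pdp-pap-0. A sketch that tallies them from the archived compose log (the path is an assumption based on the rsync step later in this log):

    import re
    from collections import Counter

    counts = Counter()
    # Assumed location of the archived broker output; adjust to wherever the log was saved.
    with open("/w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/docker_compose.log") as f:
        for line in f:
            m = re.search(r"Cached leader info UpdateMetadataPartitionState\(topicName='([^']+)'", line)
            if m:
                counts[m.group(1)] += 1
    print(counts)  # expected: __consumer_offsets: 50, policy-pdp-pap: 1 -> 51 total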
11:12:43 kafka | [2024-04-25 11:10:46,358] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-e708bf9a-aba0-402e-860d-b682878f7611 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,358] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 6f727b00-63f5-4665-9483-d1a4468f597f in Empty state. Created a new member id consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3-800d0e89-fd79-4b76-8fc5-5d0e2b9be0e7 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,384] INFO [GroupCoordinator 1]: Preparing to rebalance group 6f727b00-63f5-4665-9483-d1a4468f597f in state PreparingRebalance with old generation 0 (__consumer_offsets-10) (reason: Adding new member consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3-800d0e89-fd79-4b76-8fc5-5d0e2b9be0e7 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:46,384] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-e708bf9a-aba0-402e-860d-b682878f7611 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:47,204] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 51f148c3-bcf8-4571-938a-66df08a6d568 in Empty state. Created a new member id consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2-3fcc063a-473e-44ee-8cdf-b3d14e063106 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:47,208] INFO [GroupCoordinator 1]: Preparing to rebalance group 51f148c3-bcf8-4571-938a-66df08a6d568 in state PreparingRebalance with old generation 0 (__consumer_offsets-18) (reason: Adding new member consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2-3fcc063a-473e-44ee-8cdf-b3d14e063106 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:49,397] INFO [GroupCoordinator 1]: Stabilized group 6f727b00-63f5-4665-9483-d1a4468f597f generation 1 (__consumer_offsets-10) with 1 members (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:49,401] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:49,422] INFO [GroupCoordinator 1]: Assignment received from leader consumer-6f727b00-63f5-4665-9483-d1a4468f597f-3-800d0e89-fd79-4b76-8fc5-5d0e2b9be0e7 for group 6f727b00-63f5-4665-9483-d1a4468f597f for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:49,422] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-e708bf9a-aba0-402e-860d-b682878f7611 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
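NOTE: the entries above record the normal two-step group join (KIP-394): the first JoinGroup with an empty member id is rejected with MemberIdRequiredException, the client retries with the coordinator-assigned id, and the group moves Empty -> PreparingRebalance -> Stable. A minimal consumer that would produce exactly this flow against the broker in this test (a sketch, assuming the kafka-python package and a reachable broker at kafka:9092):

    from kafka import KafkaConsumer  # pip install kafka-python (assumption: library available)

    consumer = KafkaConsumer(
        "policy-pdp-pap",                 # topic created earlier in this job
        bootstrap_servers="kafka:9092",
        group_id="policy-pap",            # lands on __consumer_offsets-24, as logged above
        auto_offset_reset="earliest",
        consumer_timeout_ms=10000,        # stop iterating after 10 s of inactivity
    )
    for record in consumer:
        print(record.topic, record.partition, record.offset, record.value)
    consumer.close()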
11:12:43 kafka | [2024-04-25 11:10:50,209] INFO [GroupCoordinator 1]: Stabilized group 51f148c3-bcf8-4571-938a-66df08a6d568 generation 1 (__consumer_offsets-18) with 1 members (kafka.coordinator.group.GroupCoordinator)
11:12:43 kafka | [2024-04-25 11:10:50,226] INFO [GroupCoordinator 1]: Assignment received from leader consumer-51f148c3-bcf8-4571-938a-66df08a6d568-2-3fcc063a-473e-44ee-8cdf-b3d14e063106 for group 51f148c3-bcf8-4571-938a-66df08a6d568 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
11:12:43 ++ echo 'Tearing down containers...'
11:12:43 Tearing down containers...
11:12:43 ++ docker-compose down -v --remove-orphans
11:12:44 Stopping policy-apex-pdp ...
11:12:44 Stopping policy-pap ...
11:12:44 Stopping policy-api ...
11:12:44 Stopping grafana ...
11:12:44 Stopping kafka ...
11:12:44 Stopping mariadb ...
11:12:44 Stopping prometheus ...
11:12:44 Stopping simulator ...
11:12:44 Stopping zookeeper ...
11:12:44 Stopping grafana ... done
11:12:45 Stopping prometheus ... done
11:12:54 Stopping policy-apex-pdp ... done
11:13:05 Stopping simulator ... done
11:13:05 Stopping policy-pap ... done
11:13:05 Stopping mariadb ... done
11:13:06 Stopping kafka ... done
11:13:06 Stopping zookeeper ... done
11:13:15 Stopping policy-api ... done
11:13:15 Removing policy-apex-pdp ...
11:13:15 Removing policy-pap ...
11:13:15 Removing policy-api ...
11:13:15 Removing policy-db-migrator ...
11:13:15 Removing grafana ...
11:13:15 Removing kafka ...
11:13:15 Removing mariadb ...
11:13:15 Removing prometheus ...
11:13:15 Removing simulator ...
11:13:15 Removing zookeeper ...
11:13:15 Removing policy-apex-pdp ... done
11:13:15 Removing policy-api ... done
11:13:15 Removing policy-db-migrator ... done
11:13:15 Removing simulator ... done
11:13:15 Removing grafana ... done
11:13:15 Removing policy-pap ... done
11:13:15 Removing mariadb ... done
11:13:15 Removing prometheus ... done
11:13:15 Removing kafka ... done
11:13:15 Removing zookeeper ... done
11:13:15 Removing network compose_default
11:13:15 ++ cd /w/workspace/policy-pap-master-project-csit-pap
11:13:15 + load_set
11:13:15 + _setopts=hxB
11:13:15 ++ tr : ' '
11:13:15 ++ echo braceexpand:hashall:interactive-comments:xtrace
11:13:15 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:13:15 + set +o braceexpand
11:13:15 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:13:15 + set +o hashall
11:13:15 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:13:15 + set +o interactive-comments
11:13:15 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
11:13:15 + set +o xtrace
11:13:15 ++ echo hxB
11:13:15 ++ sed 's/./& /g'
11:13:15 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:13:15 + set +h
11:13:15 + for i in $(echo "$_setopts" | sed 's/./& /g')
11:13:15 + set +x
11:13:15 + rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
11:13:15 + [[ -n /tmp/tmp.t5Zgostjf0 ]]
11:13:15 + rsync -av /tmp/tmp.t5Zgostjf0/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
11:13:15 sending incremental file list
11:13:15 ./
11:13:15 log.html
11:13:15 output.xml
11:13:15 report.html
11:13:15 testplan.txt
11:13:15
11:13:15 sent 918,475 bytes received 95 bytes 1,837,140.00 bytes/sec
11:13:15 total size is 917,930 speedup is 1.00
11:13:15 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
11:13:15 + exit 0
11:13:15 $ ssh-agent -k
11:13:15 unset SSH_AUTH_SOCK;
11:13:15 unset SSH_AGENT_PID;
11:13:15 echo Agent pid 2096 killed;
11:13:15 [ssh-agent] Stopped.
11:13:15 Robot results publisher started...
11:13:15 INFO: Checking test criticality is deprecated and will be dropped in a future release!
11:13:15 -Parsing output xml:
11:13:16 Done!
11:13:16 WARNING! Could not find file: **/log.html
11:13:16 WARNING! Could not find file: **/report.html
11:13:16 -Copying log files to build dir:
11:13:16 Done!
11:13:16 -Assigning results to build:
11:13:16 Done!
11:13:16 -Checking thresholds:
11:13:16 Done!
11:13:16 Done publishing Robot results.
11:13:16 [PostBuildScript] - [INFO] Executing post build scripts.
11:13:16 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10581357481169707533.sh
11:13:16 ---> sysstat.sh
11:13:17 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4326023689404030001.sh
11:13:17 ---> package-listing.sh
11:13:17 ++ facter osfamily
11:13:17 ++ tr '[:upper:]' '[:lower:]'
11:13:17 + OS_FAMILY=debian
11:13:17 + workspace=/w/workspace/policy-pap-master-project-csit-pap
11:13:17 + START_PACKAGES=/tmp/packages_start.txt
11:13:17 + END_PACKAGES=/tmp/packages_end.txt
11:13:17 + DIFF_PACKAGES=/tmp/packages_diff.txt
11:13:17 + PACKAGES=/tmp/packages_start.txt
11:13:17 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
11:13:17 + PACKAGES=/tmp/packages_end.txt
11:13:17 + case "${OS_FAMILY}" in
11:13:17 + dpkg -l
11:13:17 + grep '^ii'
11:13:17 + '[' -f /tmp/packages_start.txt ']'
11:13:17 + '[' -f /tmp/packages_end.txt ']'
11:13:17 + diff /tmp/packages_start.txt /tmp/packages_end.txt
11:13:17 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
11:13:17 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
11:13:17 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
11:13:17 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7763392737845642067.sh
11:13:17 ---> capture-instance-metadata.sh
11:13:17 Setup pyenv:
11:13:17 system
11:13:17 3.8.13
11:13:17 3.9.13
11:13:17 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
11:13:17 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-o6aK from file:/tmp/.os_lf_venv
11:13:18 lf-activate-venv(): INFO: Installing: lftools
11:13:29 lf-activate-venv(): INFO: Adding /tmp/venv-o6aK/bin to PATH
11:13:30 INFO: Running in OpenStack, capturing instance metadata
11:13:30 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13995492243417011848.sh
11:13:30 provisioning config files...
11:13:30 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config5607851719939574347tmp
11:13:30 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
11:13:30 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
11:13:30 [EnvInject] - Injecting environment variables from a build step.
11:13:30 [EnvInject] - Injecting as environment variables the properties content
11:13:30 SERVER_ID=logs
11:13:30
11:13:30 [EnvInject] - Variables injected successfully.
11:13:30 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1158909934708005941.sh
11:13:30 ---> create-netrc.sh
11:13:30 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7076032529671196523.sh
11:13:30 ---> python-tools-install.sh
11:13:30 Setup pyenv:
11:13:30 system
11:13:30 3.8.13
11:13:30 3.9.13
11:13:30 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
11:13:30 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-o6aK from file:/tmp/.os_lf_venv
11:13:32 lf-activate-venv(): INFO: Installing: lftools
11:13:40 lf-activate-venv(): INFO: Adding /tmp/venv-o6aK/bin to PATH
11:13:40 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12241438573306393653.sh
11:13:40 ---> sudo-logs.sh
11:13:40 Archiving 'sudo' log..
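NOTE: package-listing.sh, traced above, snapshots the installed packages (dpkg -l | grep '^ii') at job start and end and diffs the two lists into /tmp/packages_diff.txt. A rough Python equivalent of that diff step (a sketch only; the real script uses diff(1), which also preserves line order and context):

    # Paths as traced in the script above.
    with open("/tmp/packages_start.txt") as f:
        start = set(f)
    with open("/tmp/packages_end.txt") as f:
        end = set(f)

    with open("/tmp/packages_diff.txt", "w") as out:
        out.writelines(f"+{line}" for line in sorted(end - start))   # packages added during the job
        out.writelines(f"-{line}" for line in sorted(start - end))   # packages removed during the job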
11:13:40 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14954025051378753774.sh
11:13:40 ---> job-cost.sh
11:13:40 Setup pyenv:
11:13:40 system
11:13:40 3.8.13
11:13:40 3.9.13
11:13:40 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
11:13:40 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-o6aK from file:/tmp/.os_lf_venv
11:13:42 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
11:13:46 lf-activate-venv(): INFO: Adding /tmp/venv-o6aK/bin to PATH
11:13:46 INFO: No Stack...
11:13:46 INFO: Retrieving Pricing Info for: v3-standard-8
11:13:47 INFO: Archiving Costs
11:13:47 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins4865724836313490042.sh
11:13:47 ---> logs-deploy.sh
11:13:47 Setup pyenv:
11:13:47 system
11:13:47 3.8.13
11:13:47 3.9.13
11:13:47 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
11:13:47 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-o6aK from file:/tmp/.os_lf_venv
11:13:48 lf-activate-venv(): INFO: Installing: lftools
11:13:57 lf-activate-venv(): INFO: Adding /tmp/venv-o6aK/bin to PATH
11:13:57 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1661
11:13:57 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
11:13:58 Archives upload complete.
11:13:58 INFO: archiving logs to Nexus
11:13:59 ---> uname -a:
11:13:59 Linux prd-ubuntu1804-docker-8c-8g-26003 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
11:13:59
11:13:59
11:13:59 ---> lscpu:
11:13:59 Architecture: x86_64
11:13:59 CPU op-mode(s): 32-bit, 64-bit
11:13:59 Byte Order: Little Endian
11:13:59 CPU(s): 8
11:13:59 On-line CPU(s) list: 0-7
11:13:59 Thread(s) per core: 1
11:13:59 Core(s) per socket: 1
11:13:59 Socket(s): 8
11:13:59 NUMA node(s): 1
11:13:59 Vendor ID: AuthenticAMD
11:13:59 CPU family: 23
11:13:59 Model: 49
11:13:59 Model name: AMD EPYC-Rome Processor
11:13:59 Stepping: 0
11:13:59 CPU MHz: 2799.998
11:13:59 BogoMIPS: 5599.99
11:13:59 Virtualization: AMD-V
11:13:59 Hypervisor vendor: KVM
11:13:59 Virtualization type: full
11:13:59 L1d cache: 32K
11:13:59 L1i cache: 32K
11:13:59 L2 cache: 512K
11:13:59 L3 cache: 16384K
11:13:59 NUMA node0 CPU(s): 0-7
11:13:59 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
11:13:59
11:13:59
11:13:59 ---> nproc:
11:13:59 8
11:13:59
11:13:59
11:13:59 ---> df -h:
11:13:59 Filesystem Size Used Avail Use% Mounted on
11:13:59 udev 16G 0 16G 0% /dev
11:13:59 tmpfs 3.2G 708K 3.2G 1% /run
11:13:59 /dev/vda1 155G 14G 142G 9% /
11:13:59 tmpfs 16G 0 16G 0% /dev/shm
11:13:59 tmpfs 5.0M 0 5.0M 0% /run/lock
11:13:59 tmpfs 16G 0 16G 0% /sys/fs/cgroup
11:13:59 /dev/vda15 105M 4.4M 100M 5% /boot/efi
11:13:59 tmpfs 3.2G 0 3.2G 0% /run/user/1001
11:13:59
11:13:59
11:13:59 ---> free -m:
11:13:59 total used free shared buff/cache available
11:13:59 Mem: 32167 853 25162 0 6151 30858
11:13:59 Swap: 1023 0 1023
11:13:59
11:13:59
11:13:59 ---> ip addr:
11:13:59 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
11:13:59 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
11:13:59 inet 127.0.0.1/8 scope host lo
11:13:59 valid_lft forever preferred_lft forever
11:13:59 inet6 ::1/128 scope host
11:13:59 valid_lft forever preferred_lft forever
11:13:59 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
11:13:59 link/ether fa:16:3e:8a:01:b7 brd ff:ff:ff:ff:ff:ff
11:13:59 inet 10.30.106.102/23 brd 10.30.107.255 scope global dynamic ens3
11:13:59 valid_lft 85918sec preferred_lft 85918sec
11:13:59 inet6 fe80::f816:3eff:fe8a:1b7/64 scope link
11:13:59 valid_lft forever preferred_lft forever
11:13:59 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
11:13:59 link/ether 02:42:28:08:a1:3e brd ff:ff:ff:ff:ff:ff
11:13:59 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
11:13:59 valid_lft forever preferred_lft forever
11:13:59
11:13:59
11:13:59 ---> sar -b -r -n DEV:
11:13:59 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-26003) 04/25/24 _x86_64_ (8 CPU)
11:13:59
11:13:59 11:06:00 LINUX RESTART (8 CPU)
11:13:59
11:13:59 11:07:01 tps rtps wtps bread/s bwrtn/s
11:13:59 11:08:01 122.93 27.43 95.50 2079.52 28218.23
11:13:59 11:09:01 112.33 9.42 102.92 1647.59 30254.82
11:13:59 11:10:01 273.57 3.57 270.00 405.93 142840.73
11:13:59 11:11:01 262.13 10.46 251.67 425.46 27071.83
11:13:59 11:12:01 18.51 0.00 18.51 0.00 18439.94
11:13:59 11:13:01 25.26 0.03 25.23 4.40 19413.03
11:13:59 Average: 135.79 8.48 127.31 760.47 44372.62
11:13:59
11:13:59 11:07:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
11:13:59 11:08:01 30019204 31730624 2920016 8.86 78240 1936296 1414420 4.16 848488 1767236 126176
11:13:59 11:09:01 27223432 31643688 5715788 17.35 119228 4510148 1606984 4.73 1024720 4247688 2398628
11:13:59 11:10:01 25571136 31416912 7368084 22.37 144440 5823164 4560984 13.42 1297488 5510576 860
11:13:59 11:11:01 23445288 29431644 9493932 28.82 157052 5938384 9089420 26.74 3450072 5438820 1808
11:13:59 11:12:01 23494852 29481944 9444368 28.67 157216 5938664 8954932 26.35 3402916 5437408 236
11:13:59 11:13:01 23729600 29741996 9209620 27.96 157648 5966652 7346744 21.62 3164524 5451512 204
11:13:59 Average: 25580585 30574468 7358635 22.34 135637 5018885 5495581 16.17 2198035 4642207 421319
11:13:59
11:13:59 11:07:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
11:13:59 11:08:01 ens3 56.16 39.54 893.71 9.42 0.00 0.00 0.00 0.00
11:13:59 11:08:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:13:59 11:08:01 lo 1.27 1.27 0.15 0.15 0.00 0.00 0.00 0.00
11:13:59 11:09:01 ens3 755.24 385.92 17568.63 29.97 0.00 0.00 0.00 0.00
11:13:59 11:09:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:13:59 11:09:01 br-8c3a6fe6fdbe 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:13:59 11:09:01 lo 9.00 9.00 0.88 0.88 0.00 0.00 0.00 0.00
11:13:59 11:10:01 veth79277a5 0.00 0.05 0.00 0.00 0.00 0.00 0.00 0.00
11:13:59 11:10:01 ens3 416.86 207.43 13588.12 14.79 0.00 0.00 0.00 0.00
11:13:59 11:10:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:13:59 11:10:01 veth6d21dd6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:13:59 11:11:01 veth79277a5 0.52 0.85 0.06 0.31 0.00 0.00 0.00 0.00
11:13:59 11:11:01 ens3 8.48 5.88 2.21 1.80 0.00 0.00 0.00 0.00
11:13:59 11:11:01 veth3dde489 2.20 2.55 0.41 0.23 0.00 0.00 0.00 0.00
11:13:59 11:11:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:13:59 11:12:01 veth79277a5 0.23 0.15 0.02 0.01 0.00 0.00 0.00 0.00
11:13:59 11:12:01 ens3 3.35 2.73 0.65 0.57 0.00 0.00 0.00 0.00
11:13:59 11:12:01 veth3dde489 3.72 5.20 0.77 0.47 0.00 0.00 0.00 0.00
11:13:59 11:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:13:59 11:13:01 ens3 17.88 15.55 6.04 16.73 0.00 0.00 0.00 0.00
11:13:59 11:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:13:59 11:13:01 vethab75b87 46.68 40.88 18.37 39.97 0.00 0.00 0.00 0.00
11:13:59 11:13:01 veth6d21dd6 100.48 123.20 79.38 32.96 0.00 0.00 0.00 0.01
11:13:59 Average: ens3 209.66 109.51 5343.08 12.21 0.00 0.00 0.00 0.00
11:13:59 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:13:59 Average: vethab75b87 7.78 6.81 3.06 6.66 0.00 0.00 0.00 0.00
11:13:59 Average: veth6d21dd6 16.75 20.53 13.23 5.49 0.00 0.00 0.00 0.00
11:13:59
11:13:59
11:13:59 ---> sar -P ALL:
11:13:59 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-26003) 04/25/24 _x86_64_ (8 CPU)
11:13:59
11:13:59 11:06:00 LINUX RESTART (8 CPU)
11:13:59
11:13:59 11:07:01 CPU %user %nice %system %iowait %steal %idle
11:13:59 11:08:01 all 9.82 0.00 0.83 2.66 0.03 86.66
11:13:59 11:08:01 0 1.15 0.00 0.38 0.27 0.00 98.20
11:13:59 11:08:01 1 2.43 0.00 0.28 2.09 0.02 95.18
11:13:59 11:08:01 2 2.22 0.00 0.62 0.70 0.00 96.46
11:13:59 11:08:01 3 3.08 0.00 0.63 0.12 0.00 96.17
11:13:59 11:08:01 4 20.64 0.00 1.47 15.52 0.10 62.27
11:13:59 11:08:01 5 14.13 0.00 0.85 0.55 0.03 84.43
11:13:59 11:08:01 6 24.60 0.00 1.37 1.00 0.05 72.98
11:13:59 11:08:01 7 10.36 0.00 1.04 1.07 0.03 87.51
11:13:59 11:09:01 all 13.10 0.00 3.86 2.44 0.07 80.54
11:13:59 11:09:01 0 11.95 0.00 4.45 1.08 0.05 82.47
11:13:59 11:09:01 1 5.80 0.00 3.92 0.10 0.05 90.12
11:13:59 11:09:01 2 8.96 0.00 3.71 0.08 0.07 87.18
11:13:59 11:09:01 3 30.97 0.00 5.33 5.87 0.10 57.73
11:13:59 11:09:01 4 9.19 0.00 2.92 11.06 0.07 76.77
11:13:59 11:09:01 5 16.06 0.00 4.43 0.54 0.07 78.90
11:13:59 11:09:01 6 12.26 0.00 2.84 0.41 0.05 84.45
11:13:59 11:09:01 7 9.69 0.00 3.26 0.37 0.05 86.64
11:13:59 11:10:01 all 9.18 0.00 4.01 9.96 0.06 76.79
11:13:59 11:10:01 0 9.01 0.00 3.54 0.34 0.07 87.04
11:13:59 11:10:01 1 8.19 0.00 4.21 5.41 0.07 82.11
11:13:59 11:10:01 2 10.46 0.00 3.92 2.27 0.05 83.31
11:13:59 11:10:01 3 8.18 0.00 3.11 1.53 0.05 87.13
11:13:59 11:10:01 4 11.81 0.00 3.67 20.24 0.05 64.23
11:13:59 11:10:01 5 8.33 0.00 4.43 4.21 0.05 82.99
11:13:59 11:10:01 6 8.92 0.00 4.26 11.19 0.05 75.57
11:13:59 11:10:01 7 8.51 0.00 4.87 34.78 0.09 51.75
11:13:59 11:11:01 all 28.62 0.00 4.23 2.06 0.11 64.99
11:13:59 11:11:01 0 27.95 0.00 4.37 1.18 0.12 66.39
11:13:59 11:11:01 1 21.90 0.00 3.08 1.50 0.12 73.40
11:13:59 11:11:01 2 29.92 0.00 3.87 0.54 0.12 65.56
11:13:59 11:11:01 3 19.02 0.00 3.50 1.04 0.08 76.36
11:13:59 11:11:01 4 34.67 0.00 5.11 7.62 0.12 52.48
11:13:59 11:11:01 5 35.54 0.00 5.31 1.95 0.10 57.09
11:13:59 11:11:01 6 30.03 0.00 4.21 0.66 0.12 64.98
11:13:59 11:11:01 7 29.93 0.00 4.32 2.00 0.10 63.64
11:13:59 11:12:01 all 4.11 0.00 0.43 0.94 0.05 94.46
11:13:59 11:12:01 0 3.36 0.00 0.38 0.08 0.05 96.13
11:13:59 11:12:01 1 4.27 0.00 0.34 0.05 0.05 95.30
11:13:59 11:12:01 2 5.01 0.00 0.38 0.12 0.03 94.45
11:13:59 11:12:01 3 2.76 0.00 0.33 0.00 0.08 96.82
11:13:59 11:12:01 4 5.48 0.00 0.62 7.22 0.05 86.64
11:13:59 11:12:01 5 3.78 0.00 0.43 0.03 0.03 95.72
11:13:59 11:12:01 6 3.96 0.00 0.50 0.00 0.05 95.49
11:13:59 11:12:01 7 4.32 0.00 0.48 0.05 0.03 95.11
11:13:59 11:13:01 all 1.37 0.00 0.35 1.06 0.05 97.18
11:13:59 11:13:01 0 1.08 0.00 0.45 0.02 0.03 98.41
11:13:59 11:13:01 1 1.68 0.00 0.27 0.00 0.05 98.01
11:13:59 11:13:01 2 1.02 0.00 0.37 0.10 0.05 98.46
11:13:59 11:13:01 3 1.55 0.00 0.53 0.02 0.07 97.83
11:13:59 11:13:01 4 0.85 0.00 0.28 7.87 0.05 90.94
11:13:59 11:13:01 5 1.74 0.00 0.25 0.02 0.05 97.94
11:13:59 11:13:01 6 0.87 0.00 0.22 0.02 0.02 98.88
11:13:59 11:13:01 7 2.17 0.00 0.37 0.43 0.05 96.97
11:13:59 Average: all 11.02 0.00 2.27 3.18 0.06 83.47
11:13:59 Average: 0 9.06 0.00 2.25 0.49 0.05 88.14
11:13:59 Average: 1 7.36 0.00 2.01 1.52 0.06 89.05
11:13:59 Average: 2 9.58 0.00 2.14 0.63 0.05 87.59
11:13:59 Average: 3 10.89 0.00 2.23 1.42 0.06 85.39
11:13:59 Average: 4 13.76 0.00 2.34 11.58 0.07 72.25
11:13:59 Average: 5 13.24 0.00 2.61 1.21 0.06 82.88
11:13:59 Average: 6 13.43 0.00 2.22 2.19 0.06 82.09
11:13:59 Average: 7 10.82 0.00 2.38 6.38 0.06 80.37
11:13:59
11:13:59
11:13:59
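NOTE: the "Average:" rows sar prints above are computed from unrounded kernel counters, so they sit close to, but not exactly at, the arithmetic mean of the displayed interval rows. A quick cross-check on the all-CPU %idle column:

    # Interval values for "all" %idle from the six sar rows above.
    idle = [86.66, 80.54, 76.79, 64.99, 94.46, 97.18]
    print(round(sum(idle) / len(idle), 2))  # -> 83.44, vs. sar's reported average of 83.47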