12:33:50 Started by upstream project "policy-pap-master-merge-java" build number 350
12:33:50 originally caused by:
12:33:50 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/pap/+/137752
12:33:50 Running as SYSTEM
12:33:50 [EnvInject] - Loading node environment variables.
12:33:50 Building remotely on prd-ubuntu1804-docker-8c-8g-26122 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
12:33:50 [ssh-agent] Looking for ssh-agent implementation...
12:33:50 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
12:33:50 $ ssh-agent
12:33:50 SSH_AUTH_SOCK=/tmp/ssh-dhttlsfCYS4a/agent.2078
12:33:50 SSH_AGENT_PID=2080
12:33:50 [ssh-agent] Started.
12:33:50 Running ssh-add (command line suppressed)
12:33:50 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9210417928353453994.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9210417928353453994.key)
12:33:50 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
12:33:50 The recommended git tool is: NONE
12:33:52 using credential onap-jenkins-ssh
12:33:52 Wiping out workspace first.
12:33:52 Cloning the remote Git repository
12:33:52 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
12:33:52 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
12:33:52 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
12:33:52 > git --version # timeout=10
12:33:52 > git --version # 'git version 2.17.1'
12:33:52 using GIT_SSH to set credentials Gerrit user
12:33:52 Verifying host key using manually-configured host key entries
12:33:52 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
12:33:53 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
12:33:53 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
12:33:53 Avoid second fetch
12:33:53 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
12:33:53 Checking out Revision 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb (refs/remotes/origin/master)
12:33:53 > git config core.sparsecheckout # timeout=10
12:33:53 > git checkout -f 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=30
12:33:54 Commit message: "Update snapshot and/or references of policy/docker to latest snapshots"
12:33:54 > git rev-list --no-walk 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=10
12:33:54 provisioning config files...
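For anyone replaying this build locally, the SCM step above reduces to pinning a fresh clone of the docker repo to the revision Jenkins resolved; a minimal sketch using the same mirror URL and commit from the log (timeouts and credential setup omitted):

    # Recreate the Jenkins checkout: init, fetch the mirror, pin to the built revision.
    git init policy-docker && cd policy-docker
    git fetch --tags --progress git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
    git checkout -f 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb   # detached HEAD at refs/remotes/origin/master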
12:33:54 copy managed file [npmrc] to file:/home/jenkins/.npmrc
12:33:54 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
12:33:54 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1083621391599179200.sh
12:33:54 ---> python-tools-install.sh
12:33:54 Setup pyenv:
12:33:54 * system (set by /opt/pyenv/version)
12:33:54 * 3.8.13 (set by /opt/pyenv/version)
12:33:54 * 3.9.13 (set by /opt/pyenv/version)
12:33:54 * 3.10.6 (set by /opt/pyenv/version)
12:33:59 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-saub
12:33:59 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
12:34:03 lf-activate-venv(): INFO: Installing: lftools
12:34:47 lf-activate-venv(): INFO: Adding /tmp/venv-saub/bin to PATH
12:34:47 Generating Requirements File
12:35:33 Python 3.10.6
12:35:33 pip 24.0 from /tmp/venv-saub/lib/python3.10/site-packages/pip (python 3.10)
12:35:34 appdirs==1.4.4
12:35:34 argcomplete==3.3.0
12:35:34 aspy.yaml==1.3.0
12:35:34 attrs==23.2.0
12:35:34 autopage==0.5.2
12:35:34 beautifulsoup4==4.12.3
12:35:34 boto3==1.34.91
12:35:34 botocore==1.34.91
12:35:34 bs4==0.0.2
12:35:34 cachetools==5.3.3
12:35:34 certifi==2024.2.2
12:35:34 cffi==1.16.0
12:35:34 cfgv==3.4.0
12:35:34 chardet==5.2.0
12:35:34 charset-normalizer==3.3.2
12:35:34 click==8.1.7
12:35:34 cliff==4.6.0
12:35:34 cmd2==2.4.3
12:35:34 cryptography==3.3.2
12:35:34 debtcollector==3.0.0
12:35:34 decorator==5.1.1
12:35:34 defusedxml==0.7.1
12:35:34 Deprecated==1.2.14
12:35:34 distlib==0.3.8
12:35:34 dnspython==2.6.1
12:35:34 docker==4.2.2
12:35:34 dogpile.cache==1.3.2
12:35:34 email_validator==2.1.1
12:35:34 filelock==3.13.4
12:35:34 future==1.0.0
12:35:34 gitdb==4.0.11
12:35:34 GitPython==3.1.43
12:35:34 google-auth==2.29.0
12:35:34 httplib2==0.22.0
12:35:34 identify==2.5.36
12:35:34 idna==3.7
12:35:34 importlib-resources==1.5.0
12:35:34 iso8601==2.1.0
12:35:34 Jinja2==3.1.3
12:35:34 jmespath==1.0.1
12:35:34 jsonpatch==1.33
12:35:34 jsonpointer==2.4
12:35:34 jsonschema==4.21.1
12:35:34 jsonschema-specifications==2023.12.1
12:35:34 keystoneauth1==5.6.0
12:35:34 kubernetes==29.0.0
12:35:34 lftools==0.37.10
12:35:34 lxml==5.2.1
12:35:34 MarkupSafe==2.1.5
12:35:34 msgpack==1.0.8
12:35:34 multi_key_dict==2.0.3
12:35:34 munch==4.0.0
12:35:34 netaddr==1.2.1
12:35:34 netifaces==0.11.0
12:35:34 niet==1.4.2
12:35:34 nodeenv==1.8.0
12:35:34 oauth2client==4.1.3
12:35:34 oauthlib==3.2.2
12:35:34 openstacksdk==3.1.0
12:35:34 os-client-config==2.1.0
12:35:34 os-service-types==1.7.0
12:35:34 osc-lib==3.0.1
12:35:34 oslo.config==9.4.0
12:35:34 oslo.context==5.5.0
12:35:34 oslo.i18n==6.3.0
12:35:34 oslo.log==5.5.1
12:35:34 oslo.serialization==5.4.0
12:35:34 oslo.utils==7.1.0
12:35:34 packaging==24.0
12:35:34 pbr==6.0.0
12:35:34 platformdirs==4.2.1
12:35:34 prettytable==3.10.0
12:35:34 pyasn1==0.6.0
12:35:34 pyasn1_modules==0.4.0
12:35:34 pycparser==2.22
12:35:34 pygerrit2==2.0.15
12:35:34 PyGithub==2.3.0
12:35:34 pyinotify==0.9.6
12:35:34 PyJWT==2.8.0
12:35:34 PyNaCl==1.5.0
12:35:34 pyparsing==2.4.7
12:35:34 pyperclip==1.8.2
12:35:34 pyrsistent==0.20.0
12:35:34 python-cinderclient==9.5.0
12:35:34 python-dateutil==2.9.0.post0
12:35:34 python-heatclient==3.5.0
12:35:34 python-jenkins==1.8.2
12:35:34 python-keystoneclient==5.4.0
12:35:34 python-magnumclient==4.4.0
12:35:34 python-novaclient==18.6.0
12:35:34 python-openstackclient==6.6.0
12:35:34 python-swiftclient==4.5.0
12:35:34 PyYAML==6.0.1
12:35:34 referencing==0.35.0
12:35:34 requests==2.31.0
12:35:34 requests-oauthlib==2.0.0
12:35:34 requestsexceptions==1.4.0
12:35:34 rfc3986==2.0.0
12:35:34 rpds-py==0.18.0
12:35:34 rsa==4.9
12:35:34 ruamel.yaml==0.18.6
12:35:34 ruamel.yaml.clib==0.2.8
12:35:34 s3transfer==0.10.1
12:35:34 simplejson==3.19.2
12:35:34 six==1.16.0
12:35:34 smmap==5.0.1
12:35:34 soupsieve==2.5
12:35:34 stevedore==5.2.0
12:35:34 tabulate==0.9.0
12:35:34 toml==0.10.2
12:35:34 tomlkit==0.12.4
12:35:34 tqdm==4.66.2
12:35:34 typing_extensions==4.11.0
12:35:34 tzdata==2024.1
12:35:34 urllib3==1.26.18
12:35:34 virtualenv==20.26.0
12:35:34 wcwidth==0.2.13
12:35:34 websocket-client==1.8.0
12:35:34 wrapt==1.16.0
12:35:34 xdg==6.0.0
12:35:34 xmltodict==0.13.0
12:35:34 yq==3.4.1
12:35:34 [EnvInject] - Injecting environment variables from a build step.
12:35:34 [EnvInject] - Injecting as environment variables the properties content
12:35:34 SET_JDK_VERSION=openjdk17
12:35:34 GIT_URL="git://cloud.onap.org/mirror"
12:35:34
12:35:34 [EnvInject] - Variables injected successfully.
12:35:34 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins8786289027505557310.sh
12:35:34 ---> update-java-alternatives.sh
12:35:34 ---> Updating Java version
12:35:34 ---> Ubuntu/Debian system detected
12:35:34 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
12:35:34 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
12:35:35 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
12:35:35 openjdk version "17.0.4" 2022-07-19
12:35:35 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
12:35:35 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
12:35:35 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
12:35:35 [EnvInject] - Injecting environment variables from a build step.
12:35:35 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
12:35:35 [EnvInject] - Variables injected successfully.
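update-java-alternatives.sh boils down to repointing the java/javac alternatives at the JDK named by SET_JDK_VERSION. A rough manual equivalent, assuming the --set form of update-alternatives (the script's exact flags are not shown; only its output is):

    # Point the java and javac alternatives at OpenJDK 17 (manual mode), then export JAVA_HOME.
    sudo update-alternatives --set java /usr/lib/jvm/java-17-openjdk-amd64/bin/java
    sudo update-alternatives --set javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javac
    export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
    java -version    # expect: openjdk version "17.0.4" as above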
12:35:35 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins7012746460804729830.sh
12:35:35 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
12:35:35 + set +u
12:35:35 + save_set
12:35:35 + RUN_CSIT_SAVE_SET=ehxB
12:35:35 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
12:35:35 + '[' 1 -eq 0 ']'
12:35:35 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
12:35:35 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
12:35:35 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
12:35:35 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
12:35:35 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
12:35:35 + export ROBOT_VARIABLES=
12:35:35 + ROBOT_VARIABLES=
12:35:35 + export PROJECT=pap
12:35:35 + PROJECT=pap
12:35:35 + cd /w/workspace/policy-pap-master-project-csit-pap
12:35:35 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
12:35:35 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
12:35:35 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
12:35:35 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
12:35:35 + relax_set
12:35:35 + set +e
12:35:35 + set +o pipefail
12:35:35 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
12:35:35 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
12:35:35 +++ mktemp -d
12:35:35 ++ ROBOT_VENV=/tmp/tmp.Pg6Bn3f7f8
12:35:35 ++ echo ROBOT_VENV=/tmp/tmp.Pg6Bn3f7f8
12:35:35 +++ python3 --version
12:35:35 ++ echo 'Python version is: Python 3.6.9'
12:35:35 Python version is: Python 3.6.9
12:35:35 ++ python3 -m venv --clear /tmp/tmp.Pg6Bn3f7f8
12:35:37 ++ source /tmp/tmp.Pg6Bn3f7f8/bin/activate
12:35:37 +++ deactivate nondestructive
12:35:37 +++ '[' -n '' ']'
12:35:37 +++ '[' -n '' ']'
12:35:37 +++ '[' -n /bin/bash -o -n '' ']'
12:35:37 +++ hash -r
12:35:37 +++ '[' -n '' ']'
12:35:37 +++ unset VIRTUAL_ENV
12:35:37 +++ '[' '!' nondestructive = nondestructive ']'
12:35:37 +++ VIRTUAL_ENV=/tmp/tmp.Pg6Bn3f7f8
12:35:37 +++ export VIRTUAL_ENV
12:35:37 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
12:35:37 +++ PATH=/tmp/tmp.Pg6Bn3f7f8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
12:35:37 +++ export PATH
12:35:37 +++ '[' -n '' ']'
12:35:37 +++ '[' -z '' ']'
12:35:37 +++ _OLD_VIRTUAL_PS1=
12:35:37 +++ '[' 'x(tmp.Pg6Bn3f7f8) ' '!=' x ']'
12:35:37 +++ PS1='(tmp.Pg6Bn3f7f8) '
12:35:37 +++ export PS1
12:35:37 +++ '[' -n /bin/bash -o -n '' ']'
12:35:37 +++ hash -r
12:35:37 ++ set -exu
12:35:37 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
12:35:40 ++ echo 'Installing Python Requirements'
12:35:40 Installing Python Requirements
12:35:40 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
12:36:17 ++ python3 -m pip -qq freeze
12:36:17 bcrypt==4.0.1
12:36:17 beautifulsoup4==4.12.3
12:36:17 bitarray==2.9.2
12:36:17 certifi==2024.2.2
12:36:17 cffi==1.15.1
12:36:17 charset-normalizer==2.0.12
12:36:17 cryptography==40.0.2
12:36:17 decorator==5.1.1
12:36:17 elasticsearch==7.17.9
12:36:17 elasticsearch-dsl==7.4.1
12:36:17 enum34==1.1.10
12:36:17 idna==3.7
12:36:17 importlib-resources==5.4.0
12:36:17 ipaddr==2.2.0
12:36:17 isodate==0.6.1
12:36:17 jmespath==0.10.0
12:36:17 jsonpatch==1.32
12:36:17 jsonpath-rw==1.4.0
12:36:17 jsonpointer==2.3
12:36:17 lxml==5.2.1
12:36:17 netaddr==0.8.0
12:36:17 netifaces==0.11.0
12:36:17 odltools==0.1.28
12:36:17 paramiko==3.4.0
12:36:17 pkg_resources==0.0.0
12:36:17 ply==3.11
12:36:17 pyang==2.6.0
12:36:17 pyangbind==0.8.1
12:36:17 pycparser==2.21
12:36:17 pyhocon==0.3.60
12:36:17 PyNaCl==1.5.0
12:36:17 pyparsing==3.1.2
12:36:17 python-dateutil==2.9.0.post0
12:36:17 regex==2023.8.8
12:36:17 requests==2.27.1
12:36:17 robotframework==6.1.1
12:36:17 robotframework-httplibrary==0.4.2
12:36:17 robotframework-pythonlibcore==3.0.0
12:36:17 robotframework-requests==0.9.4
12:36:17 robotframework-selenium2library==3.0.0
12:36:17 robotframework-seleniumlibrary==5.1.3
12:36:17 robotframework-sshlibrary==3.8.0
12:36:17 scapy==2.5.0
12:36:17 scp==0.14.5
12:36:17 selenium==3.141.0
12:36:17 six==1.16.0
12:36:17 soupsieve==2.3.2.post1
12:36:17 urllib3==1.26.18
12:36:17 waitress==2.0.0
12:36:17 WebOb==1.8.7
12:36:17 WebTest==3.0.0
12:36:17 zipp==3.6.0
12:36:17 ++ mkdir -p /tmp/tmp.Pg6Bn3f7f8/src/onap
12:36:17 ++ rm -rf /tmp/tmp.Pg6Bn3f7f8/src/onap/testsuite
12:36:17 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
12:36:35 ++ echo 'Installing python confluent-kafka library'
12:36:35 Installing python confluent-kafka library
12:36:35 ++ python3 -m pip install -qq confluent-kafka
12:36:36 ++ echo 'Uninstall docker-py and reinstall docker.'
12:36:36 Uninstall docker-py and reinstall docker.
12:36:36 ++ python3 -m pip uninstall -y -qq docker
12:36:36 ++ python3 -m pip install -U -qq docker
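Two details of this environment prep are worth noting: the Robot libraries come from the ONAP Nexus staging index rather than plain PyPI, and the legacy docker-py package is swapped for the maintained docker SDK. Condensed from the trace above:

    # Prerelease ONAP robot library from the staging index, then the docker SDK swap.
    python3 -m pip install -qq --upgrade \
        --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple \
        'robotframework-onap==0.6.0.*' --pre
    python3 -m pip uninstall -y -qq docker
    python3 -m pip install -U -qq docker    # resolves to docker==5.0.3 in this run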
12:36:38 ++ python3 -m pip -qq freeze
12:36:38 bcrypt==4.0.1
12:36:38 beautifulsoup4==4.12.3
12:36:38 bitarray==2.9.2
12:36:38 certifi==2024.2.2
12:36:38 cffi==1.15.1
12:36:38 charset-normalizer==2.0.12
12:36:38 confluent-kafka==2.3.0
12:36:38 cryptography==40.0.2
12:36:38 decorator==5.1.1
12:36:38 deepdiff==5.7.0
12:36:38 dnspython==2.2.1
12:36:38 docker==5.0.3
12:36:38 elasticsearch==7.17.9
12:36:38 elasticsearch-dsl==7.4.1
12:36:38 enum34==1.1.10
12:36:38 future==1.0.0
12:36:38 idna==3.7
12:36:38 importlib-resources==5.4.0
12:36:38 ipaddr==2.2.0
12:36:38 isodate==0.6.1
12:36:38 Jinja2==3.0.3
12:36:38 jmespath==0.10.0
12:36:38 jsonpatch==1.32
12:36:38 jsonpath-rw==1.4.0
12:36:38 jsonpointer==2.3
12:36:38 kafka-python==2.0.2
12:36:38 lxml==5.2.1
12:36:38 MarkupSafe==2.0.1
12:36:38 more-itertools==5.0.0
12:36:38 netaddr==0.8.0
12:36:38 netifaces==0.11.0
12:36:38 odltools==0.1.28
12:36:38 ordered-set==4.0.2
12:36:38 paramiko==3.4.0
12:36:38 pbr==6.0.0
12:36:38 pkg_resources==0.0.0
12:36:38 ply==3.11
12:36:38 protobuf==3.19.6
12:36:38 pyang==2.6.0
12:36:38 pyangbind==0.8.1
12:36:38 pycparser==2.21
12:36:38 pyhocon==0.3.60
12:36:38 PyNaCl==1.5.0
12:36:38 pyparsing==3.1.2
12:36:38 python-dateutil==2.9.0.post0
12:36:38 PyYAML==6.0.1
12:36:38 regex==2023.8.8
12:36:38 requests==2.27.1
12:36:38 robotframework==6.1.1
12:36:38 robotframework-httplibrary==0.4.2
12:36:38 robotframework-onap==0.6.0.dev105
12:36:38 robotframework-pythonlibcore==3.0.0
12:36:38 robotframework-requests==0.9.4
12:36:38 robotframework-selenium2library==3.0.0
12:36:38 robotframework-seleniumlibrary==5.1.3
12:36:38 robotframework-sshlibrary==3.8.0
12:36:38 robotlibcore-temp==1.0.2
12:36:38 scapy==2.5.0
12:36:38 scp==0.14.5
12:36:38 selenium==3.141.0
12:36:38 six==1.16.0
12:36:38 soupsieve==2.3.2.post1
12:36:38 urllib3==1.26.18
12:36:38 waitress==2.0.0
12:36:38 WebOb==1.8.7
12:36:38 websocket-client==1.3.1
12:36:38 WebTest==3.0.0
12:36:38 zipp==3.6.0
12:36:38 ++ uname
12:36:38 ++ grep -q Linux
12:36:38 ++ sudo apt-get -y -qq install libxml2-utils
12:36:38 + load_set
12:36:38 + _setopts=ehuxB
12:36:38 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
12:36:38 ++ tr : ' '
12:36:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:36:38 + set +o braceexpand
12:36:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:36:38 + set +o hashall
12:36:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:36:38 + set +o interactive-comments
12:36:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:36:38 + set +o nounset
12:36:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:36:38 + set +o xtrace
12:36:38 ++ echo ehuxB
12:36:38 ++ sed 's/./& /g'
12:36:38 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:36:38 + set +e
12:36:38 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:36:38 + set +h
12:36:38 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:36:38 + set +u
12:36:38 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:36:38 + set +x
12:36:38 + source_safely /tmp/tmp.Pg6Bn3f7f8/bin/activate
12:36:38 + '[' -z /tmp/tmp.Pg6Bn3f7f8/bin/activate ']'
12:36:38 + relax_set
12:36:38 + set +e
12:36:38 + set +o pipefail
12:36:38 + . /tmp/tmp.Pg6Bn3f7f8/bin/activate
12:36:38 ++ deactivate nondestructive
12:36:38 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
12:36:38 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
12:36:38 ++ export PATH
12:36:38 ++ unset _OLD_VIRTUAL_PATH
12:36:38 ++ '[' -n '' ']'
12:36:38 ++ '[' -n /bin/bash -o -n '' ']'
12:36:38 ++ hash -r
12:36:38 ++ '[' -n '' ']'
12:36:38 ++ unset VIRTUAL_ENV
12:36:38 ++ '[' '!' nondestructive = nondestructive ']'
12:36:38 ++ VIRTUAL_ENV=/tmp/tmp.Pg6Bn3f7f8
12:36:38 ++ export VIRTUAL_ENV
12:36:38 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
12:36:38 ++ PATH=/tmp/tmp.Pg6Bn3f7f8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
12:36:38 ++ export PATH
12:36:38 ++ '[' -n '' ']'
12:36:38 ++ '[' -z '' ']'
12:36:38 ++ _OLD_VIRTUAL_PS1='(tmp.Pg6Bn3f7f8) '
12:36:38 ++ '[' 'x(tmp.Pg6Bn3f7f8) ' '!=' x ']'
12:36:38 ++ PS1='(tmp.Pg6Bn3f7f8) (tmp.Pg6Bn3f7f8) '
12:36:38 ++ export PS1
12:36:38 ++ '[' -n /bin/bash -o -n '' ']'
12:36:38 ++ hash -r
12:36:38 + load_set
12:36:38 + _setopts=hxB
12:36:38 ++ echo braceexpand:hashall:interactive-comments:xtrace
12:36:38 ++ tr : ' '
12:36:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:36:38 + set +o braceexpand
12:36:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:36:38 + set +o hashall
12:36:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:36:38 + set +o interactive-comments
12:36:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:36:38 + set +o xtrace
12:36:38 ++ echo hxB
12:36:38 ++ sed 's/./& /g'
12:36:38 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:36:38 + set +h
12:36:38 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:36:38 + set +x
12:36:38 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
12:36:38 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
12:36:38 + export TEST_OPTIONS=
12:36:38 + TEST_OPTIONS=
12:36:38 ++ mktemp -d
12:36:38 + WORKDIR=/tmp/tmp.IfKGrR3aFZ
12:36:38 + cd /tmp/tmp.IfKGrR3aFZ
12:36:38 + docker login -u docker -p docker nexus3.onap.org:10001
12:36:40 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
12:36:40 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
12:36:40 Configure a credential helper to remove this warning. See
12:36:40 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
12:36:40
12:36:40 Login Succeeded
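As the warning above says, passing the password with -p leaves it in argv and shell history; the recommended form reads it from stdin (the variable name here is illustrative, not from the CI job):

    # Log in to the Nexus pull-through registry without the password in argv.
    echo "$NEXUS_PASSWORD" | docker login -u docker --password-stdin nexus3.onap.org:10001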
12:36:40 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
12:36:40 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
12:36:40 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
12:36:40 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
12:36:40 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
12:36:40 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
12:36:40 + relax_set
12:36:40 + set +e
12:36:40 + set +o pipefail
12:36:40 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
12:36:40 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
12:36:40 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
12:36:40 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
12:36:40 +++ GERRIT_BRANCH=master
12:36:40 +++ echo GERRIT_BRANCH=master
12:36:40 GERRIT_BRANCH=master
12:36:40 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
12:36:40 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
12:36:40 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
12:36:40 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
12:36:41 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
12:36:41 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
12:36:41 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
12:36:41 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
12:36:41 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
12:36:41 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
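Those two sed passes derive extra test fixtures from the example vCPE policy: one renames the monitored measurement, the other bumps the policy to version 2. With explicit output files (the trace does not show its redirections, so the file names below are assumptions) the same transforms look like:

    POLICY=$DATA/vCPE.policy.monitoring.input.tosca.json
    # Fixture with a different measurement name.
    sed -e 's!Measurement_vGMUX!ADifferentValue!' "$POLICY" > vCPE.modified.json
    # Fixture re-versioned from 1.0.0 to 2.0.0.
    sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' \
        -e 's!"policy-version": 1!"policy-version": 2!' "$POLICY" > vCPE.v2.json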
12:36:41 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
12:36:41 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
12:36:41 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
12:36:41 +++ grafana=false
12:36:41 +++ gui=false
12:36:41 +++ [[ 2 -gt 0 ]]
12:36:41 +++ key=apex-pdp
12:36:41 +++ case $key in
12:36:41 +++ echo apex-pdp
12:36:41 apex-pdp
12:36:41 +++ component=apex-pdp
12:36:41 +++ shift
12:36:41 +++ [[ 1 -gt 0 ]]
12:36:41 +++ key=--grafana
12:36:41 +++ case $key in
12:36:41 +++ grafana=true
12:36:41 +++ shift
12:36:41 +++ [[ 0 -gt 0 ]]
12:36:41 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
12:36:41 +++ echo 'Configuring docker compose...'
12:36:41 Configuring docker compose...
12:36:41 +++ source export-ports.sh
12:36:41 +++ source get-versions.sh
12:36:44 +++ '[' -z pap ']'
12:36:44 +++ '[' -n apex-pdp ']'
12:36:44 +++ '[' apex-pdp == logs ']'
12:36:44 +++ '[' true = true ']'
12:36:44 +++ echo 'Starting apex-pdp application with Grafana'
12:36:44 Starting apex-pdp application with Grafana
12:36:44 +++ docker-compose up -d apex-pdp grafana
12:36:45 Creating network "compose_default" with the default driver
12:36:45 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
12:36:48 latest: Pulling from prom/prometheus
12:36:55 Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1
12:36:55 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
12:36:55 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
12:36:55 latest: Pulling from grafana/grafana
12:37:01 Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e
12:37:01 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
12:37:01 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
12:37:01 10.10.2: Pulling from mariadb
12:37:08 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
12:37:08 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
12:37:08 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)...
12:37:11 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator
12:37:16 Digest: sha256:8c393534de923b51cd2c2937210a65f4f06f457c0dff40569dd547e5429385c8
12:37:16 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT
12:37:16 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
12:37:18 latest: Pulling from confluentinc/cp-zookeeper
12:38:25 Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214
12:38:25 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
12:38:25 Pulling kafka (confluentinc/cp-kafka:latest)...
12:38:25 latest: Pulling from confluentinc/cp-kafka
12:38:29 Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa
12:38:29 Status: Downloaded newer image for confluentinc/cp-kafka:latest
12:38:29 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
12:38:29 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
12:38:33 Digest: sha256:6c43c624b12507ad4db9e9629273366fa843a4406dbb129d263c111145911791
12:38:33 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
12:38:33 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)...
12:38:33 3.1.2-SNAPSHOT: Pulling from onap/policy-api
12:38:34 Digest: sha256:a3b0738a5c3612fb51928bf2c6d20b8feb39bdb05a9ed3daffb9977a144bacf6
12:38:34 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT
12:38:34 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)...
12:38:34 3.1.2-SNAPSHOT: Pulling from onap/policy-pap
12:38:37 Digest: sha256:a268743829cd0409cbb5d4678d69b9f5d14d1499e307454e509124b67f361bc4
12:38:37 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT
12:38:37 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)...
12:38:37 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp
12:38:45 Digest: sha256:75a74a87b7345e553563fbe2ececcd2285ed9500fd91489d9968ae81123b9982
12:38:45 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT
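All images resolve through the nexus3.onap.org:10001 pull-through proxy, and compose starts only the requested services plus their dependency chains. The manual equivalent of what start-compose.sh just did, under the workspace layout shown above:

    cd /w/workspace/policy-pap-master-project-csit-pap/compose
    source export-ports.sh                   # publishes the 300xx host ports used below
    source get-versions.sh                   # pins the 3.1.2-SNAPSHOT image tags
    docker-compose up -d apex-pdp grafana    # mariadb, kafka, api, pap, ... start as dependencies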
12:38:45 Creating zookeeper ...
12:38:45 Creating prometheus ...
12:38:45 Creating mariadb ...
12:38:45 Creating simulator ...
12:39:13 Creating mariadb ... done
12:39:13 Creating policy-db-migrator ...
12:39:14 Creating policy-db-migrator ... done
12:39:14 Creating policy-api ...
12:39:15 Creating zookeeper ... done
12:39:15 Creating kafka ...
12:39:16 Creating prometheus ... done
12:39:16 Creating grafana ...
12:39:17 Creating kafka ... done
12:39:18 Creating policy-api ... done
12:39:18 Creating policy-pap ...
12:39:19 Creating policy-pap ... done
12:39:20 Creating simulator ... done
12:39:20 Creating policy-apex-pdp ...
12:39:21 Creating policy-apex-pdp ... done
12:39:22 Creating grafana ... done
12:39:23 +++ echo 'Prometheus server: http://localhost:30259'
12:39:23 Prometheus server: http://localhost:30259
12:39:23 +++ echo 'Grafana server: http://localhost:30269'
12:39:23 Grafana server: http://localhost:30269
12:39:23 +++ cd /w/workspace/policy-pap-master-project-csit-pap
12:39:23 ++ sleep 10
12:39:33 ++ unset http_proxy https_proxy
12:39:33 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
12:39:33 Waiting for REST to come up on localhost port 30003...
12:39:33 NAMES STATUS
12:39:33 policy-apex-pdp Up 11 seconds
12:39:33 policy-pap Up 13 seconds
12:39:33 grafana Up 10 seconds
12:39:33 kafka Up 15 seconds
12:39:33 policy-api Up 14 seconds
12:39:33 policy-db-migrator Up 18 seconds
12:39:33 simulator Up 12 seconds
12:39:33 mariadb Up 19 seconds
12:39:33 prometheus Up 16 seconds
12:39:33 zookeeper Up 17 seconds
12:39:38 NAMES STATUS
12:39:38 policy-apex-pdp Up 16 seconds
12:39:38 policy-pap Up 18 seconds
12:39:38 grafana Up 15 seconds
12:39:38 kafka Up 20 seconds
12:39:38 policy-api Up 19 seconds
12:39:38 simulator Up 17 seconds
12:39:38 mariadb Up 24 seconds
12:39:38 prometheus Up 21 seconds
12:39:38 zookeeper Up 22 seconds
12:39:43 NAMES STATUS
12:39:43 policy-apex-pdp Up 21 seconds
12:39:43 policy-pap Up 23 seconds
12:39:43 grafana Up 20 seconds
12:39:43 kafka Up 25 seconds
12:39:43 policy-api Up 24 seconds
12:39:43 simulator Up 22 seconds
12:39:43 mariadb Up 29 seconds
12:39:43 prometheus Up 26 seconds
12:39:43 zookeeper Up 27 seconds
12:39:48 NAMES STATUS
12:39:48 policy-apex-pdp Up 26 seconds
12:39:48 policy-pap Up 28 seconds
12:39:48 grafana Up 25 seconds
12:39:48 kafka Up 30 seconds
12:39:48 policy-api Up 29 seconds
12:39:48 simulator Up 27 seconds
12:39:48 mariadb Up 34 seconds
12:39:48 prometheus Up 31 seconds
12:39:48 zookeeper Up 32 seconds
12:39:53 NAMES STATUS
12:39:53 policy-apex-pdp Up 31 seconds
12:39:53 policy-pap Up 33 seconds
12:39:53 grafana Up 30 seconds
12:39:53 kafka Up 35 seconds
12:39:53 policy-api Up 34 seconds
12:39:53 simulator Up 32 seconds
12:39:53 mariadb Up 39 seconds
12:39:53 prometheus Up 36 seconds
12:39:53 zookeeper Up 37 seconds
12:39:58 NAMES STATUS
12:39:58 policy-apex-pdp Up 36 seconds
12:39:58 policy-pap Up 38 seconds
12:39:58 grafana Up 35 seconds
12:39:58 kafka Up 40 seconds
12:39:58 policy-api Up 39 seconds
12:39:58 simulator Up 37 seconds
12:39:58 mariadb Up 44 seconds
12:39:58 prometheus Up 41 seconds
12:39:58 zookeeper Up 42 seconds
12:40:03 NAMES STATUS
12:40:03 policy-apex-pdp Up 41 seconds
12:40:03 policy-pap Up 43 seconds
12:40:03 grafana Up 40 seconds
12:40:03 kafka Up 45 seconds
12:40:03 policy-api Up 44 seconds
12:40:03 simulator Up 42 seconds
12:40:03 mariadb Up 49 seconds
12:40:03 prometheus Up 46 seconds
12:40:03 zookeeper Up 47 seconds
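wait_for_rest.sh gates the test start on PAP's REST port answering. Its source is not shown here, so as a hedged stand-in with the same interface (host, port) and the same docker ps printouts while waiting, assuming nc is available:

    # Poll host:port until a TCP connect succeeds, echoing container status while waiting.
    wait_for_rest() {
      local host=$1 port=$2
      echo "Waiting for REST to come up on $host port $port..."
      until nc -z "$host" "$port" 2>/dev/null; do
        docker ps --format 'table {{ .Names }}\t{{ .Status }}'
        sleep 5
      done
    }
    wait_for_rest localhost 30003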
12:40:03 ++ export 'SUITES=pap-test.robot
12:40:03 pap-slas.robot'
12:40:03 ++ SUITES='pap-test.robot
12:40:03 pap-slas.robot'
12:40:03 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
12:40:03 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
12:40:03 + load_set
12:40:03 + _setopts=hxB
12:40:03 ++ echo braceexpand:hashall:interactive-comments:xtrace
12:40:03 ++ tr : ' '
12:40:03 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:40:03 + set +o braceexpand
12:40:03 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:40:03 + set +o hashall
12:40:03 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:40:03 + set +o interactive-comments
12:40:03 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:40:03 + set +o xtrace
12:40:03 ++ echo hxB
12:40:03 ++ sed 's/./& /g'
12:40:03 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:40:03 + set +h
12:40:03 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:40:03 + set +x
12:40:03 + docker_stats
12:40:03 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
12:40:03 ++ uname -s
12:40:03 + '[' Linux == Darwin ']'
12:40:03 + sh -c 'top -bn1 | head -3'
12:40:03 top - 12:40:03 up 7 min, 0 users, load average: 4.06, 2.34, 1.04
12:40:03 Tasks: 204 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
12:40:03 %Cpu(s): 8.6 us, 1.7 sy, 0.0 ni, 81.1 id, 8.5 wa, 0.0 hi, 0.0 si, 0.0 st
12:40:03 + echo
12:40:03
12:40:03 + sh -c 'free -h'
12:40:03 total used free shared buff/cache available
12:40:03 Mem: 31G 2.8G 22G 1.3M 6.0G 28G
12:40:03 Swap: 1.0G 0B 1.0G
12:40:03 + echo
12:40:03 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
12:40:03
12:40:03 NAMES STATUS
12:40:03 policy-apex-pdp Up 42 seconds
12:40:03 policy-pap Up 44 seconds
12:40:03 grafana Up 41 seconds
12:40:03 kafka Up 46 seconds
12:40:03 policy-api Up 45 seconds
12:40:03 simulator Up 43 seconds
12:40:03 mariadb Up 50 seconds
12:40:03 prometheus Up 47 seconds
12:40:03 zookeeper Up 48 seconds
12:40:03 + echo
12:40:03 + docker stats --no-stream
12:40:03
12:40:06 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
12:40:06 9f10aca361cb policy-apex-pdp 10.13% 185.8MiB / 31.41GiB 0.58% 19.8kB / 23.9kB 0B / 0B 47
12:40:06 43cc711f1c86 policy-pap 14.55% 572.5MiB / 31.41GiB 1.78% 41.6kB / 46.5kB 0B / 149MB 60
12:40:06 a99a722b04f1 grafana 0.03% 57.54MiB / 31.41GiB 0.18% 18.2kB / 3.38kB 0B / 25MB 16
12:40:06 7b59f0a67712 kafka 37.00% 363.8MiB / 31.41GiB 1.13% 115kB / 118kB 0B / 127kB 83
12:40:06 9e79fdfe83bd policy-api 0.20% 496.1MiB / 31.41GiB 1.54% 990kB / 647kB 0B / 0B 54
12:40:06 7e5cd61d6e18 simulator 0.08% 120MiB / 31.41GiB 0.37% 1.23kB / 0B 0B / 0B 76
12:40:06 89c2db910687 mariadb 0.02% 102MiB / 31.41GiB 0.32% 935kB / 1.18MB 11.1MB / 68.1MB 37
12:40:06 9ef6b7b16ac4 prometheus 0.00% 18.91MiB / 31.41GiB 0.06% 1.6kB / 474B 0B / 0B 13
12:40:06 070cd7ecdd50 zookeeper 20.10% 101MiB / 31.41GiB 0.31% 81.2kB / 67.2kB 0B / 336kB 60
12:40:06 + echo
12:40:06
12:40:06 + cd /tmp/tmp.IfKGrR3aFZ
12:40:06 + echo 'Reading the testplan:'
12:40:06 Reading the testplan:
12:40:06 + echo 'pap-test.robot
12:40:06 pap-slas.robot'
12:40:06 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
12:40:06 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
12:40:06 + cat testplan.txt
12:40:06 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
12:40:06 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
12:40:06 ++ xargs
12:40:06 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
12:40:06 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
12:40:06 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
12:40:06 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
12:40:06 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
12:40:06 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
12:40:06 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
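The testplan expansion above is the small but load-bearing step: comments and blank lines are stripped, each remaining suite name is prefixed with the tests directory, and xargs folds the list onto one line for robot. Condensed (the CI script stages an intermediate testplan.txt in the work directory):

    # testplan entries -> absolute suite paths on a single line.
    SUITES=$(egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt \
      | sed "s|^|$TEST_PLAN_DIR/|" \
      | xargs)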
12:40:06 + relax_set
12:40:06 + set +e
12:40:06 + set +o pipefail
12:40:06 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
12:40:06 ==============================================================================
12:40:06 pap
12:40:06 ==============================================================================
12:40:06 pap.Pap-Test
12:40:06 ==============================================================================
12:40:07 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
12:40:07 ------------------------------------------------------------------------------
12:40:08 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
12:40:08 ------------------------------------------------------------------------------
12:40:11 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
12:40:11 ------------------------------------------------------------------------------
12:40:11 Healthcheck :: Verify policy pap health check | PASS |
12:40:11 ------------------------------------------------------------------------------
12:40:32 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
12:40:32 ------------------------------------------------------------------------------
12:40:32 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
12:40:32 ------------------------------------------------------------------------------
12:40:33 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
12:40:33 ------------------------------------------------------------------------------
12:40:33 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
12:40:33 ------------------------------------------------------------------------------
12:40:33 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
12:40:33 ------------------------------------------------------------------------------
12:40:33 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
12:40:33 ------------------------------------------------------------------------------
12:40:34 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
12:40:34 ------------------------------------------------------------------------------
12:40:34 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
12:40:34 ------------------------------------------------------------------------------
12:40:34 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
12:40:34 ------------------------------------------------------------------------------
12:40:34 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
12:40:34 ------------------------------------------------------------------------------
12:40:34 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
12:40:34 ------------------------------------------------------------------------------
12:40:35 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
12:40:35 ------------------------------------------------------------------------------
12:40:35 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
12:40:35 ------------------------------------------------------------------------------
12:40:55 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | FAIL |
12:40:55 pdpTypeC != pdpTypeA
12:40:55 ------------------------------------------------------------------------------
12:40:55 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
12:40:55 ------------------------------------------------------------------------------
12:40:55 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
12:40:55 ------------------------------------------------------------------------------
12:40:56 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
12:40:56 ------------------------------------------------------------------------------
12:40:56 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
12:40:56 ------------------------------------------------------------------------------
12:40:56 pap.Pap-Test | FAIL |
12:40:56 22 tests, 21 passed, 1 failed
12:40:56 ==============================================================================
12:40:56 pap.Pap-Slas
12:40:56 ==============================================================================
12:41:56 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
12:41:56 ------------------------------------------------------------------------------
12:41:56 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
12:41:56 ------------------------------------------------------------------------------
12:41:56 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
12:41:56 ------------------------------------------------------------------------------
12:41:56 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
12:41:56 ------------------------------------------------------------------------------
12:41:56 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
12:41:56 ------------------------------------------------------------------------------
12:41:56 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
12:41:56 ------------------------------------------------------------------------------
12:41:56 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
12:41:56 ------------------------------------------------------------------------------
12:41:56 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
12:41:56 ------------------------------------------------------------------------------
12:41:56 pap.Pap-Slas | PASS |
12:41:56 8 tests, 8 passed, 0 failed
12:41:56 ==============================================================================
12:41:56 pap | FAIL |
12:41:56 30 tests, 29 passed, 1 failed
12:41:56 ==============================================================================
12:41:56 Output: /tmp/tmp.IfKGrR3aFZ/output.xml
12:41:56 Log: /tmp/tmp.IfKGrR3aFZ/log.html
12:41:56 Report: /tmp/tmp.IfKGrR3aFZ/report.html
12:41:56 + RESULT=1
12:41:56 + load_set
12:41:56 + _setopts=hxB
12:41:56 ++ tr : ' '
12:41:56 ++ echo braceexpand:hashall:interactive-comments:xtrace
12:41:56 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:41:56 + set +o braceexpand
12:41:56 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:41:56 + set +o hashall
12:41:56 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:41:56 + set +o interactive-comments
12:41:56 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:41:56 + set +o xtrace
12:41:56 ++ echo hxB
12:41:56 ++ sed 's/./& /g'
12:41:56 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:41:56 + set +h
12:41:56 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:41:56 + set +x
12:41:56 + echo 'RESULT: 1'
12:41:56 RESULT: 1
12:41:56 + exit 1
12:41:56 + on_exit
12:41:56 + rc=1
12:41:56 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
12:41:56 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
12:41:56 NAMES STATUS
12:41:56 policy-apex-pdp Up 2 minutes
12:41:56 policy-pap Up 2 minutes
12:41:56 grafana Up 2 minutes
12:41:56 kafka Up 2 minutes
12:41:56 policy-api Up 2 minutes
12:41:56 simulator Up 2 minutes
12:41:56 mariadb Up 2 minutes
12:41:56 prometheus Up 2 minutes
12:41:56 zookeeper Up 2 minutes
12:41:56 + docker_stats
12:41:56 ++ uname -s
12:41:56 + '[' Linux == Darwin ']'
12:41:56 + sh -c 'top -bn1 | head -3'
12:41:56 top - 12:41:56 up 9 min, 0 users, load average: 1.30, 1.97, 1.07
12:41:56 Tasks: 202 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
12:41:56 %Cpu(s): 7.8 us, 1.5 sy, 0.0 ni, 83.0 id, 7.6 wa, 0.0 hi, 0.0 si, 0.0 st
12:41:56 + echo
12:41:56
12:41:56 + sh -c 'free -h'
12:41:56 total used free shared buff/cache available
12:41:56 Mem: 31G 2.7G 22G 1.3M 6.0G 28G
12:41:56 Swap: 1.0G 0B 1.0G
12:41:56 + echo
12:41:56
12:41:56 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
12:41:56 NAMES STATUS
12:41:56 policy-apex-pdp Up 2 minutes
12:41:56 policy-pap Up 2 minutes
12:41:56 grafana Up 2 minutes
12:41:56 kafka Up 2 minutes
12:41:56 policy-api Up 2 minutes
12:41:56 simulator Up 2 minutes
12:41:56 mariadb Up 2 minutes
12:41:56 prometheus Up 2 minutes
12:41:56 zookeeper Up 2 minutes
12:41:56 + echo
12:41:56
12:41:56 + docker stats --no-stream
12:41:59 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
12:41:59 9f10aca361cb policy-apex-pdp 0.35% 189.9MiB / 31.41GiB 0.59% 124kB / 187kB 0B / 0B 52
12:41:59 43cc711f1c86 policy-pap 0.58% 505MiB / 31.41GiB 1.57% 2.55MB / 1.16MB 0B / 149MB 66
12:41:59 a99a722b04f1 grafana 0.04% 57.96MiB / 31.41GiB 0.18% 19.1kB / 4.45kB 0B / 25MB 16
12:41:59 7b59f0a67712 kafka 1.24% 394.1MiB / 31.41GiB 1.23% 683kB / 595kB 0B / 602kB 85
12:41:59 9e79fdfe83bd policy-api 0.14% 496.4MiB / 31.41GiB 1.54% 2.46MB / 1.1MB 0B / 0B 56
12:41:59 7e5cd61d6e18 simulator 0.07% 120.1MiB / 31.41GiB 0.37% 1.45kB / 0B 0B / 0B 78
12:41:59 89c2db910687 mariadb 0.02% 103.3MiB / 31.41GiB 0.32% 2.02MB / 4.88MB 11.1MB / 68.4MB 28
12:41:59 9ef6b7b16ac4 prometheus 0.00% 25.91MiB / 31.41GiB 0.08% 191kB / 10.9kB 0B / 0B 13
12:41:59 070cd7ecdd50 zookeeper 0.07% 103MiB / 31.41GiB 0.32% 297kB / 285kB 0B / 336kB 60
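The exit-code plumbing here is worth spelling out: robot.run exits with the number of failed tests (one here, from QueryPolicyAuditAfterUnDeploy), the CSIT wrapper stores that in RESULT and re-raises it, and the on_exit trap then captures the diagnostics above before shutting the stack down. A sketch of that flow:

    # Robot's exit status is the failed-test count (0 means all passed).
    python3 -m robot.run -N pap ${ROBOT_VARIABLES} ${SUITES}
    RESULT=$?
    echo "RESULT: $RESULT"
    exit $RESULT    # picked up by the on_exit trap, which dumps docker ps/stats and collects logs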
12:41:59 + echo
12:41:59
12:41:59 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
12:41:59 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
12:41:59 + relax_set
12:41:59 + set +e
12:41:59 + set +o pipefail
12:41:59 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
12:41:59 ++ echo 'Shut down started!'
12:41:59 Shut down started!
12:41:59 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
12:41:59 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
12:41:59 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
12:41:59 ++ source export-ports.sh
12:41:59 ++ source get-versions.sh
12:42:02 ++ echo 'Collecting logs from docker compose containers...'
12:42:02 Collecting logs from docker compose containers...
12:42:02 ++ docker-compose logs
12:42:04 ++ cat docker_compose.log
12:42:04 Attaching to policy-apex-pdp, policy-pap, grafana, kafka, policy-api, policy-db-migrator, simulator, mariadb, prometheus, zookeeper
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988052892Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-25T12:39:22Z
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988350016Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988364636Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988368426Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988371606Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988374686Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988379337Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988406227Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988413467Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988416787Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988424427Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988427957Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988431377Z level=info msg=Target target=[all]
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988439767Z level=info msg="Path Home" path=/usr/share/grafana
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988443047Z level=info msg="Path Data" path=/var/lib/grafana
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988446277Z level=info msg="Path Logs" path=/var/log/grafana
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988449657Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988452777Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
12:42:04 grafana | logger=settings t=2024-04-25T12:39:22.988456058Z level=info msg="App mode production"
12:42:04 grafana | logger=sqlstore t=2024-04-25T12:39:22.988844333Z level=info msg="Connecting to DB" dbtype=sqlite3
12:42:04 grafana | logger=sqlstore t=2024-04-25T12:39:22.988868013Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:22.989770036Z level=info msg="Starting DB migrations"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:22.991212625Z level=info msg="Executing migration" id="create migration_log table"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:22.992318341Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.106036ms
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:22.99973868Z level=info msg="Executing migration" id="create user table"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.000887547Z level=info msg="Migration successfully executed" id="create user table" duration=1.151237ms
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.008952723Z level=info msg="Executing migration" id="add unique index user.login"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.010209782Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.258019ms
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.018264238Z level=info msg="Executing migration" id="add unique index user.email"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.019006799Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=742.361µs
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.025802446Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.026953484Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.150948ms
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.031186909Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.032292037Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.105048ms
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.090860302Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.09459299Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.721178ms
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.101584519Z level=info msg="Executing migration" id="create user table v2"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.102452293Z level=info msg="Migration successfully executed" id="create user table v2" duration=867.264µs
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.108909534Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.110068403Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.158529ms
id="create index UQE_user_login - v2" duration=1.158529ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.115336714Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.116669946Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.334462ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.12460363Z level=info msg="Executing migration" id="copy data_source v1 to v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.125306411Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=702.141µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.130382101Z level=info msg="Executing migration" id="Drop old table user_v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.131220184Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=837.683µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.137847498Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.139643346Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.794988ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.143641509Z level=info msg="Executing migration" id="Update user table charset" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.1436898Z level=info msg="Migration successfully executed" id="Update user table charset" duration=41.131µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.148636117Z level=info msg="Executing migration" id="Add last_seen_at column to user" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.149717994Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.086897ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.16090304Z level=info msg="Executing migration" id="Add missing user data" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.161200584Z level=info msg="Migration successfully executed" id="Add missing user data" duration=297.754µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.166444837Z level=info msg="Executing migration" id="Add is_disabled column to user" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.167656235Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.211048ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.174701086Z level=info msg="Executing migration" id="Add index user.login/user.email" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.17561081Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=911.944µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.183309331Z level=info msg="Executing migration" id="Add is_service_account column to user" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.184557Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.247519ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.189952405Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.201461585Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=11.51089ms 12:42:04 grafana | logger=migrator 
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.204797598Z level=info msg="Executing migration" id="Add uid column to user"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.205673272Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=876.044µs
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.208802081Z level=info msg="Executing migration" id="Update uid column values for users"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.209047525Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=245.414µs
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.216114875Z level=info msg="Executing migration" id="Add unique index user_uid"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.217391726Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.285121ms
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.224706801Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.225073486Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=366.135µs
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.228428469Z level=info msg="Executing migration" id="create temp user table v1-7"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.229289422Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=860.623µs
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.234847799Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.236093079Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.24596ms
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.240935564Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.242079473Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.143949ms
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.246494142Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.248340171Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.845849ms
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.258960788Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.25978497Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=827.132µs
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.266257212Z level=info msg="Executing migration" id="Update temp_user table charset"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.266311012Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=53.7µs
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.272297537Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.273172661Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=876.474µs
IDX_temp_user_email - v1" duration=876.474µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.280455984Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.281764195Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.305031ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.286190814Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.286946707Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=755.693µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.293774343Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.294943982Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.169519ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.300887726Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.30434058Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.451835ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.310078959Z level=info msg="Executing migration" id="create temp_user v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.310972643Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=893.524µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.318145776Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.319607288Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.461522ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.34200067Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.343231629Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.232009ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.41338304Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.414620979Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.237499ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.432343807Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.433642927Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.2921ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.442244012Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.442635568Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=391.906µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.452114527Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.452731787Z level=info msg="Migration successfully executed" id="drop 
temp_user_tmp_qwerty" duration=620.4µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.460310086Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.460676902Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=372.976µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.4733358Z level=info msg="Executing migration" id="create star table" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.474167904Z level=info msg="Migration successfully executed" id="create star table" duration=832.664µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.480851738Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.482071257Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.219049ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.485817326Z level=info msg="Executing migration" id="create org table v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.487010414Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.192158ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.492077474Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.493340363Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.262979ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.498784469Z level=info msg="Executing migration" id="create org_user table v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.500172191Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.386412ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.504198264Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.504899696Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=701.202µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.510893189Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.512031707Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.131098ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.517550434Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.518344316Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=791.312µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.523314344Z level=info msg="Executing migration" id="Update org table charset" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.523340724Z level=info msg="Migration successfully executed" id="Update org table charset" duration=27.14µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.526849709Z level=info msg="Executing migration" id="Update org_user table charset" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.526879399Z level=info msg="Migration successfully executed" id="Update org_user table charset" 
duration=29.68µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.531066176Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.53134299Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=277.214µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.539213003Z level=info msg="Executing migration" id="create dashboard table" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.540549674Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.336481ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.544806991Z level=info msg="Executing migration" id="add index dashboard.account_id" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.545574153Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=766.922µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.549401113Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.550313248Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=912.555µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.555728593Z level=info msg="Executing migration" id="create dashboard_tag table" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.556395213Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=666.79µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.562003261Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.562801483Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=795.652µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.567203533Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.568422021Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.218628ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.573421Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.578333897Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.912487ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.584714367Z level=info msg="Executing migration" id="create dashboard v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.585484409Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=767.092µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.599850764Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.601040823Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.26079ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.61039867Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.611708691Z level=info msg="Migration successfully executed" id="create index 
UQE_dashboard_org_id_slug - v2" duration=1.309571ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.646066779Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.64673933Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=674.181µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.656829368Z level=info msg="Executing migration" id="drop table dashboard_v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.657648741Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=823.693µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.666839895Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.666954797Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=116.132µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.671456267Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.673351928Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.895331ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.67738056Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.679080377Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.699567ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.682658703Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.684560303Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.90133ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.691303289Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.692178262Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=874.533µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.696914277Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.698842387Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.92815ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.705019354Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.705806376Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=786.792µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.712856127Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.714453742Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.597285ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.719231947Z level=info msg="Executing migration" id="Update dashboard table charset" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.719258197Z level=info msg="Migration successfully executed" id="Update dashboard table charset" 
duration=26.95µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.723216859Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.72324727Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=28.041µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.729208444Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.731471549Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.265924ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.734979924Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.736876413Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.896489ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.745377387Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.747615612Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.237765ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.757201703Z level=info msg="Executing migration" id="Add column uid in dashboard" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.760430493Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=3.22793ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.766023931Z level=info msg="Executing migration" id="Update uid column values in dashboard" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:23.766339016Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=327.035µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.04256325Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.043344581Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=783.752µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.054748581Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.055333439Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=584.938µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.065157789Z level=info msg="Executing migration" id="Update dashboard title length" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.06521006Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=52.971µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.069904642Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.070496419Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=591.517µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.078502505Z level=info msg="Executing migration" id="create dashboard_provisioning" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.079094304Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=591.529µs 12:42:04 
grafana | logger=migrator t=2024-04-25T12:39:24.088375196Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.092302718Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=3.927412ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.104354098Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.104936226Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=579.647µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.119452197Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.120081956Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=629.929µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.128129112Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.128814101Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=684.789µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.13326599Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.133534144Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=267.964µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.139073747Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.139682635Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=608.528µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.149313843Z level=info msg="Executing migration" id="Add check_sum column" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.15286129Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.548577ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.222129816Z level=info msg="Executing migration" id="Add index for dashboard_title" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.223587165Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.485589ms 12:42:04 kafka | ===> User 12:42:04 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 12:42:04 kafka | ===> Configuring ... 12:42:04 kafka | Running in Zookeeper mode... 12:42:04 kafka | ===> Running preflight checks ... 12:42:04 kafka | ===> Check if /var/lib/kafka/data is writable ... 12:42:04 kafka | ===> Check if Zookeeper is healthy ... 
12:42:04 kafka | [2024-04-25 12:39:21,869] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,869] INFO Client environment:host.name=7b59f0a67712 (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,869] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,870] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,870] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,870] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-col
lections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,870] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,870] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,870] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,870] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,870] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,870] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,870] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,870] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,870] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,870] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,870] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,870] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,873] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:21,877] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 12:42:04 kafka | [2024-04-25 12:39:21,882] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 12:42:04 kafka | [2024-04-25 12:39:21,890] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 12:42:04 kafka | [2024-04-25 12:39:21,908] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. 
(org.apache.zookeeper.ClientCnxn) 12:42:04 kafka | [2024-04-25 12:39:21,909] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 12:42:04 kafka | [2024-04-25 12:39:21,916] INFO Socket connection established, initiating session, client: /172.17.0.8:52748, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) 12:42:04 kafka | [2024-04-25 12:39:21,988] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000609db0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 12:42:04 kafka | [2024-04-25 12:39:22,112] INFO Session: 0x100000609db0000 closed (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:22,112] INFO EventThread shut down for session: 0x100000609db0000 (org.apache.zookeeper.ClientCnxn) 12:42:04 kafka | Using log4j config /etc/kafka/log4j.properties 12:42:04 kafka | ===> Launching ... 12:42:04 kafka | ===> Launching kafka ... 12:42:04 kafka | [2024-04-25 12:39:22,884] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 12:42:04 kafka | [2024-04-25 12:39:23,182] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 12:42:04 kafka | [2024-04-25 12:39:23,246] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 12:42:04 kafka | [2024-04-25 12:39:23,247] INFO starting (kafka.server.KafkaServer) 12:42:04 kafka | [2024-04-25 12:39:23,247] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 12:42:04 kafka | [2024-04-25 12:39:23,259] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 12:42:04 kafka | [2024-04-25 12:39:23,262] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:23,262] INFO Client environment:host.name=7b59f0a67712 (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:23,262] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:23,262] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.233196823Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.233504307Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=307.274µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.244820036Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.245327683Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=507.337µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.25721235Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.258360896Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.147376ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.268815093Z level=info msg="Executing migration" id="Add isPublic for dashboard" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.27078979Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.979977ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.277556389Z level=info msg="Executing migration" id="create data_source table" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.27831871Z level=info msg="Migration successfully executed" id="create data_source table" duration=763.511µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.285764318Z level=info msg="Executing migration" id="add index data_source.account_id" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.287736344Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.970726ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.297738457Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.298864021Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.124924ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.305783602Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.306547483Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=758.611µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.312822646Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.313546045Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=723.659µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.322332261Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.327138435Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=4.808124ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.337586644Z level=info msg="Executing migration" id="create data_source table v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.338586197Z level=info msg="Migration successfully executed" 
id="create data_source table v2" duration=999.463µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.346186027Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.34717347Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=987.683µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.352743774Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.353594136Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=850.272µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.359562805Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.360605678Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.042373ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.366163082Z level=info msg="Executing migration" id="Add column with_credentials" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.368579854Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.416472ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.383673903Z level=info msg="Executing migration" id="Add secure json data column" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.387668617Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=4.010964ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.394097561Z level=info msg="Executing migration" id="Update data_source table charset" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.394125242Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=25.081µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.396711876Z level=info msg="Executing migration" id="Update initial version to 1" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.396914249Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=202.433µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.402829197Z level=info msg="Executing migration" id="Add read_only data column" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.407174215Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.344518ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.41214922Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.412375194Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=226.474µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.510300348Z level=info msg="Executing migration" id="Update json_data with nulls" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.510624014Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=327.726µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.519671123Z level=info msg="Executing migration" id="Add uid column" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.522153516Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.487483ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.529009606Z 
level=info msg="Executing migration" id="Update uid value" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.529251159Z level=info msg="Migration successfully executed" id="Update uid value" duration=243.753µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.535027576Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.535995399Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=968.483µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.539606657Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.540438687Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=831.91µs 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.549349065Z level=info msg="Executing migration" id="create api_key table" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.55040587Z level=info msg="Migration successfully executed" id="create api_key table" duration=2.027257ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.55650033Z level=info msg="Executing migration" id="add index api_key.account_id" 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.557609874Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.109654ms 12:42:04 kafka | [2024-04-25 12:39:23,262] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:23,262] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kaf
ka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1
.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:23,263] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 12:42:04 mariadb | 2024-04-25 12:39:13+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 12:42:04 mariadb | 2024-04-25 12:39:13+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 12:42:04 mariadb | 2024-04-25 12:39:13+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 12:42:04 mariadb | 2024-04-25 12:39:13+00:00 [Note] [Entrypoint]: Initializing database files 12:42:04 mariadb | 2024-04-25 12:39:13 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 12:42:04 mariadb | 2024-04-25 12:39:13 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 12:42:04 mariadb | 2024-04-25 12:39:14 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 12:42:04 mariadb | 12:42:04 mariadb | 12:42:04 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 12:42:04 mariadb | To do so, start the server, then issue the following command: 12:42:04 mariadb | 12:42:04 mariadb | '/usr/bin/mysql_secure_installation' 12:42:04 mariadb | 12:42:04 mariadb | which will also give you the option of removing the test 12:42:04 mariadb | databases and anonymous user created by default. This is 12:42:04 mariadb | strongly recommended for production servers. 12:42:04 mariadb | 12:42:04 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 12:42:04 mariadb | 12:42:04 mariadb | Please report any problems at https://mariadb.org/jira 12:42:04 mariadb | 12:42:04 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 
12:42:04 mariadb | 12:42:04 mariadb | Consider joining MariaDB's strong and vibrant community: 12:42:04 mariadb | https://mariadb.org/get-involved/ 12:42:04 mariadb | 12:42:04 mariadb | 2024-04-25 12:39:17+00:00 [Note] [Entrypoint]: Database files initialized 12:42:04 mariadb | 2024-04-25 12:39:17+00:00 [Note] [Entrypoint]: Starting temporary server 12:42:04 mariadb | 2024-04-25 12:39:17+00:00 [Note] [Entrypoint]: Waiting for server startup 12:42:04 kafka | [2024-04-25 12:39:23,263] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 12:42:04 policy-apex-pdp | Waiting for mariadb port 3306... 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.561084901Z level=info msg="Executing migration" id="add index api_key.key" 12:42:04 mariadb | 2024-04-25 12:39:17 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 100 ... 12:42:04 kafka | [2024-04-25 12:39:23,263] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:23,263] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 12:42:04 kafka | [2024-04-25 12:39:23,263] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.561647898Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=562.657µs 12:42:04 policy-db-migrator | Waiting for mariadb port 3306... 12:42:04 mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 12:42:04 policy-api | Waiting for mariadb port 3306... 12:42:04 kafka | [2024-04-25 12:39:23,263] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 12:42:04 policy-apex-pdp | mariadb (172.17.0.2:3306) open 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.568358587Z level=info msg="Executing migration" id="add index api_key.account_id_name" 12:42:04 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 12:42:04 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 12:42:04 mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: Number of transaction pools: 1 12:42:04 prometheus | ts=2024-04-25T12:39:16.719Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d 12:42:04 zookeeper | ===> User 12:42:04 policy-pap | Waiting for mariadb port 3306... 12:42:04 policy-api | mariadb (172.17.0.2:3306) open 12:42:04 kafka | [2024-04-25 12:39:23,263] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 12:42:04 policy-apex-pdp | Waiting for kafka port 9092... 
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.569695305Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.336158ms 12:42:04 simulator | overriding logback.xml 12:42:04 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 12:42:04 mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 12:42:04 prometheus | ts=2024-04-25T12:39:16.719Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)" 12:42:04 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 12:42:04 policy-pap | mariadb (172.17.0.2:3306) open 12:42:04 policy-api | Waiting for policy-db-migrator port 6824... 12:42:04 kafka | [2024-04-25 12:39:23,263] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 12:42:04 policy-apex-pdp | kafka (172.17.0.8:9092) open 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.573591566Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 12:42:04 simulator | 2024-04-25 12:39:21,243 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 12:42:04 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 12:42:04 mariadb | 2024-04-25 12:39:17 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 12:42:04 prometheus | ts=2024-04-25T12:39:16.720Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)" 12:42:04 zookeeper | ===> Configuring ... 12:42:04 policy-pap | Waiting for kafka port 9092... 12:42:04 policy-api | policy-db-migrator (172.17.0.6:6824) open 12:42:04 kafka | [2024-04-25 12:39:23,263] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 12:42:04 policy-apex-pdp | Waiting for pap port 6969... 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.574394327Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=801.8µs 12:42:04 simulator | 2024-04-25 12:39:21,311 INFO org.onap.policy.models.simulators starting 12:42:04 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 12:42:04 mariadb | 2024-04-25 12:39:17 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 12:42:04 prometheus | ts=2024-04-25T12:39:16.720Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 12:42:04 zookeeper | ===> Running preflight checks ... 
12:42:04 policy-pap | kafka (172.17.0.8:9092) open 12:42:04 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 12:42:04 kafka | [2024-04-25 12:39:23,263] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) 12:42:04 policy-apex-pdp | pap (172.17.0.10:6969) open 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.578179407Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 12:42:04 simulator | 2024-04-25 12:39:21,311 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 12:42:04 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 12:42:04 mariadb | 2024-04-25 12:39:17 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 12:42:04 prometheus | ts=2024-04-25T12:39:16.720Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 12:42:04 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ... 12:42:04 policy-pap | Waiting for api port 6969... 12:42:04 policy-api | 12:42:04 kafka | [2024-04-25 12:39:23,263] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 12:42:04 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.578918517Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=746.56µs 12:42:04 simulator | 2024-04-25 12:39:21,486 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 12:42:04 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 12:42:04 mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 12:42:04 prometheus | ts=2024-04-25T12:39:16.720Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 12:42:04 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ... 12:42:04 policy-pap | api (172.17.0.7:6969) open 12:42:04 policy-api | . 
____ _ __ _ _ 12:42:04 kafka | [2024-04-25 12:39:23,263] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.440+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.58822048Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 12:42:04 simulator | 2024-04-25 12:39:21,487 INFO org.onap.policy.models.simulators starting A&AI simulator 12:42:04 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 12:42:04 mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: Completed initialization of buffer pool 12:42:04 prometheus | ts=2024-04-25T12:39:16.828Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 12:42:04 zookeeper | ===> Launching ... 12:42:04 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 12:42:04 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 12:42:04 kafka | [2024-04-25 12:39:23,265] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@447a020 (org.apache.zookeeper.ZooKeeper) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.600+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.58893511Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=720.811µs 12:42:04 simulator | 2024-04-25 12:39:21,613 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 12:42:04 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 12:42:04 mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 12:42:04 prometheus | ts=2024-04-25T12:39:16.829Z caller=main.go:1129 level=info msg="Starting TSDB ..." 12:42:04 zookeeper | ===> Launching zookeeper ... 
12:42:04 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
12:42:04 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
12:42:04 kafka | [2024-04-25 12:39:23,268] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
12:42:04 policy-apex-pdp | allow.auto.create.topics = true
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.598096551Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
12:42:04 simulator | 2024-04-25 12:39:21,625 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
12:42:04 policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded!
12:42:04 mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: 128 rollback segments are active.
12:42:04 prometheus | ts=2024-04-25T12:39:16.831Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
12:42:04 zookeeper | [2024-04-25 12:39:19,444] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:42:04 policy-pap |
12:42:04 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
12:42:04 kafka | [2024-04-25 12:39:23,273] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
12:42:04 policy-apex-pdp | auto.commit.interval.ms = 5000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.606804156Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.708605ms
12:42:04 simulator | 2024-04-25 12:39:21,628 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
12:42:04 policy-db-migrator | 321 blocks
12:42:04 mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
12:42:04 prometheus | ts=2024-04-25T12:39:16.831Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
12:42:04 zookeeper | [2024-04-25 12:39:19,453] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:42:04 policy-pap | . ____ _ __ _ _
12:42:04 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
12:42:04 kafka | [2024-04-25 12:39:23,274] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
12:42:04 policy-apex-pdp | auto.include.jmx.reporter = true
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.613353812Z level=info msg="Executing migration" id="create api_key table v2"
12:42:04 simulator | 2024-04-25 12:39:21,636 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
12:42:04 policy-db-migrator | Preparing upgrade release version: 0800
12:42:04 mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
12:42:04 prometheus | ts=2024-04-25T12:39:16.836Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
12:42:04 policy-api | =========|_|==============|___/=/_/_/_/
12:42:04 policy-apex-pdp | auto.offset.reset = latest
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.61395152Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=594.58µs
12:42:04 simulator | 2024-04-25 12:39:21,710 INFO Session workerName=node0
12:42:04 policy-db-migrator | Preparing upgrade release version: 0900
12:42:04 mariadb | 2024-04-25 12:39:17 0 [Note] InnoDB: log sequence number 46590; transaction id 14
12:42:04 prometheus | ts=2024-04-25T12:39:16.836Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=4.29µs
12:42:04 prometheus | ts=2024-04-25T12:39:16.836Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
12:42:04 kafka | [2024-04-25 12:39:23,277] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn)
12:42:04 policy-api | :: Spring Boot :: (v3.1.10)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.618178286Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
12:42:04 simulator | 2024-04-25 12:39:22,246 INFO Using GSON for REST calls
12:42:04 policy-db-migrator | Preparing upgrade release version: 1000
12:42:04 mariadb | 2024-04-25 12:39:17 0 [Note] Plugin 'FEEDBACK' is disabled.
12:42:04 zookeeper | [2024-04-25 12:39:19,453] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:42:04 prometheus | ts=2024-04-25T12:39:16.838Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
12:42:04 kafka | [2024-04-25 12:39:23,284] INFO Socket connection established, initiating session, client: /172.17.0.8:52750, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn)
12:42:04 policy-apex-pdp | bootstrap.servers = [kafka:9092]
12:42:04 policy-api |
12:42:04 simulator | 2024-04-25 12:39:22,312 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}
12:42:04 policy-db-migrator | Preparing upgrade release version: 1100
12:42:04 mariadb | 2024-04-25 12:39:17 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
12:42:04 zookeeper | [2024-04-25 12:39:19,453] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:42:04 prometheus | ts=2024-04-25T12:39:16.838Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=1.257651ms wal_replay_duration=1.061358ms wbl_replay_duration=370ns total_replay_duration=2.36672ms
12:42:04 kafka | [2024-04-25 12:39:23,296] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000609db0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.619987671Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.809304ms
12:42:04 policy-apex-pdp | check.crcs = true
12:42:04 policy-api | [2024-04-25T12:39:35.837+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
12:42:04 simulator | 2024-04-25 12:39:22,318 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
12:42:04 policy-db-migrator | Preparing upgrade release version: 1200
12:42:04 mariadb | 2024-04-25 12:39:17 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
12:42:04 zookeeper | [2024-04-25 12:39:19,453] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:42:04 prometheus | ts=2024-04-25T12:39:16.842Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC
12:42:04 kafka | [2024-04-25 12:39:23,302] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.625798047Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
12:42:04 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
12:42:04 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
12:42:04 policy-api | [2024-04-25T12:39:35.901+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 26 (/app/api.jar started by policy in /opt/app/policy/api/bin)
12:42:04 simulator | 2024-04-25 12:39:22,324 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1585ms
12:42:04 policy-db-migrator | Preparing upgrade release version: 1300
12:42:04 mariadb | 2024-04-25 12:39:17 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
12:42:04 zookeeper | [2024-04-25 12:39:19,455] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
12:42:04 prometheus | ts=2024-04-25T12:39:16.842Z caller=main.go:1153 level=info msg="TSDB started"
12:42:04 kafka | [2024-04-25 12:39:23,653] INFO Cluster ID = 6HLElDkITkKpDhaqvETosg (kafka.server.KafkaServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.626914252Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.116305ms
12:42:04 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
12:42:04 policy-apex-pdp | client.id = consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-1
12:42:04 policy-api | [2024-04-25T12:39:35.902+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
12:42:04 simulator | 2024-04-25 12:39:22,325 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4304 ms.
12:42:04 policy-db-migrator | Done
12:42:04 mariadb | 2024-04-25 12:39:17 0 [Note] mariadbd: ready for connections.
12:42:04 zookeeper | [2024-04-25 12:39:19,455] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
12:42:04 prometheus | ts=2024-04-25T12:39:16.842Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
12:42:04 kafka | [2024-04-25 12:39:23,658] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.637892857Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
12:42:04 policy-apex-pdp | client.rack =
12:42:04 policy-apex-pdp | connections.max.idle.ms = 540000
12:42:04 policy-api | [2024-04-25T12:39:37.851+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
12:42:04 simulator | 2024-04-25 12:39:22,332 INFO org.onap.policy.models.simulators starting SDNC simulator
12:42:04 policy-db-migrator | name version
12:42:04 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution
12:42:04 zookeeper | [2024-04-25 12:39:19,455] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
12:42:04 prometheus | ts=2024-04-25T12:39:16.844Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.514026ms db_storage=1.94µs remote_storage=2.73µs web_handler=920ns query_engine=1.5µs scrape=490.548µs scrape_sd=210.573µs notify=35.211µs notify_sd=23.31µs rules=3.1µs tracing=6.611µs
12:42:04 kafka | [2024-04-25 12:39:23,709] INFO KafkaConfig values:
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.638743578Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=852.751µs
12:42:04 policy-apex-pdp | default.api.timeout.ms = 60000
12:42:04 policy-apex-pdp | enable.auto.commit = true
12:42:04 policy-api | [2024-04-25T12:39:37.936+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 76 ms. Found 6 JPA repository interfaces.
12:42:04 simulator | 2024-04-25 12:39:22,335 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
12:42:04 policy-db-migrator | policyadmin 0
12:42:04 mariadb | 2024-04-25 12:39:18+00:00 [Note] [Entrypoint]: Temporary server started.
12:42:04 zookeeper | [2024-04-25 12:39:19,455] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
12:42:04 prometheus | ts=2024-04-25T12:39:16.844Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
12:42:04 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.644391143Z level=info msg="Executing migration" id="copy api_key v1 to v2"
12:42:04 policy-apex-pdp | exclude.internal.topics = true
12:42:04 policy-apex-pdp | fetch.max.bytes = 52428800
12:42:04 policy-api | [2024-04-25T12:39:38.356+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
12:42:04 simulator | 2024-04-25 12:39:22,336 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
12:42:04 zookeeper | [2024-04-25 12:39:19,459] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
12:42:04 prometheus | ts=2024-04-25T12:39:16.844Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
12:42:04 kafka | alter.config.policy.class.name = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.644729507Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=339.414µs
12:42:04 policy-apex-pdp | fetch.max.wait.ms = 500
12:42:04 policy-apex-pdp | fetch.min.bytes = 1
12:42:04 policy-api | [2024-04-25T12:39:38.359+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
12:42:04 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
12:42:04 mariadb | 2024-04-25 12:39:20+00:00 [Note] [Entrypoint]: Creating user policy_user
12:42:04 simulator | 2024-04-25 12:39:22,336 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
12:42:04 zookeeper | [2024-04-25 12:39:19,459] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:42:04 kafka | alter.log.dirs.replication.quota.window.num = 11
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.649746674Z level=info msg="Executing migration" id="Drop old table api_key_v1"
12:42:04 policy-apex-pdp | group.id = 4b79aeb3-604a-4e33-80d9-cdeedf19ce63
12:42:04 policy-apex-pdp | group.instance.id = null
12:42:04 policy-api | [2024-04-25T12:39:38.981+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
12:42:04 policy-db-migrator | upgrade: 0 -> 1300
12:42:04 mariadb | 2024-04-25 12:39:20+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
12:42:04 simulator | 2024-04-25 12:39:22,337 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
12:42:04 zookeeper | [2024-04-25 12:39:19,460] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:42:04 zookeeper | [2024-04-25 12:39:19,460] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.650295001Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=547.107µs
12:42:04 policy-apex-pdp | heartbeat.interval.ms = 3000
12:42:04 policy-apex-pdp | interceptor.classes = []
12:42:04 policy-api | [2024-04-25T12:39:38.991+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
12:42:04 policy-db-migrator |
12:42:04 mariadb |
12:42:04 simulator | 2024-04-25 12:39:22,342 INFO Session workerName=node0
12:42:04 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
12:42:04 zookeeper | [2024-04-25 12:39:19,460] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.656484473Z level=info msg="Executing migration" id="Update api_key table charset"
12:42:04 policy-apex-pdp | internal.leave.group.on.close = true
12:42:04 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
12:42:04 policy-api | [2024-04-25T12:39:38.993+00:00|INFO|StandardService|main] Starting service [Tomcat]
12:42:04 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
12:42:04 mariadb |
12:42:04 simulator | 2024-04-25 12:39:22,402 INFO Using GSON for REST calls
12:42:04 kafka | authorizer.class.name =
12:42:04 zookeeper | [2024-04-25 12:39:19,460] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.656509644Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=25.741µs
12:42:04 policy-apex-pdp | isolation.level = read_uncommitted
12:42:04 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
12:42:04 policy-api | [2024-04-25T12:39:38.993+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
12:42:04 policy-db-migrator | --------------
12:42:04 mariadb | 2024-04-25 12:39:20+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
12:42:04 simulator | 2024-04-25 12:39:22,412 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}
12:42:04 kafka | auto.create.topics.enable = true
12:42:04 zookeeper | [2024-04-25 12:39:19,460] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.660072941Z level=info msg="Executing migration" id="Add expires to api_key table"
12:42:04 policy-apex-pdp | max.partition.fetch.bytes = 1048576
12:42:04 policy-apex-pdp | max.poll.interval.ms = 300000
12:42:04 policy-api | [2024-04-25T12:39:39.088+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
12:42:04 mariadb | 2024-04-25 12:39:20+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
12:42:04 simulator | 2024-04-25 12:39:22,414 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
12:42:04 kafka | auto.include.jmx.reporter = true
12:42:04 zookeeper | [2024-04-25 12:39:19,472] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@77eca502 (org.apache.zookeeper.server.ServerMetrics)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.662814797Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.741036ms
12:42:04 policy-apex-pdp | max.poll.records = 500
12:42:04 policy-apex-pdp | metadata.max.age.ms = 300000
12:42:04 policy-api | [2024-04-25T12:39:39.089+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3117 ms
12:42:04 policy-db-migrator | --------------
12:42:04 mariadb | #!/bin/bash -xv
12:42:04 simulator | 2024-04-25 12:39:22,415 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @1675ms
12:42:04 kafka | auto.leader.rebalance.enable = true
12:42:04 zookeeper | [2024-04-25 12:39:19,475] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.670990295Z level=info msg="Executing migration" id="Add service account foreign key"
12:42:04 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
12:42:04 policy-apex-pdp | metric.reporters = []
12:42:04 policy-api | [2024-04-25T12:39:39.509+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
12:42:04 policy-db-migrator |
12:42:04 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
12:42:04 simulator | 2024-04-25 12:39:22,415 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4921 ms.
12:42:04 kafka | background.threads = 10
12:42:04 zookeeper | [2024-04-25 12:39:19,475] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.673059683Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.070638ms
12:42:04 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
12:42:04 policy-apex-pdp | metrics.num.samples = 2
12:42:04 policy-api | [2024-04-25T12:39:39.583+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final
12:42:04 policy-db-migrator |
12:42:04 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
12:42:04 simulator | 2024-04-25 12:39:22,416 INFO org.onap.policy.models.simulators starting SO simulator
12:42:04 kafka | broker.heartbeat.interval.ms = 2000
12:42:04 zookeeper | [2024-04-25 12:39:19,477] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.679272645Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
12:42:04 policy-pap | =========|_|==============|___/=/_/_/_/
12:42:04 policy-apex-pdp | metrics.recording.level = INFO
12:42:04 policy-api | [2024-04-25T12:39:39.644+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
12:42:04 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
12:42:04 mariadb | #
12:42:04 simulator | 2024-04-25 12:39:22,418 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
12:42:04 kafka | broker.id = 1
12:42:04 zookeeper | [2024-04-25 12:39:19,488] INFO (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.679423917Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=151.232µs
12:42:04 policy-pap | :: Spring Boot :: (v3.1.10)
12:42:04 policy-apex-pdp | metrics.sample.window.ms = 30000
12:42:04 policy-api | [2024-04-25T12:39:39.933+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
12:42:04 policy-db-migrator | --------------
12:42:04 mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
12:42:04 simulator | 2024-04-25 12:39:22,418 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
12:42:04 kafka | broker.id.generation.enable = true
12:42:04 zookeeper | [2024-04-25 12:39:19,488] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.682443407Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
12:42:04 policy-pap |
12:42:04 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
12:42:04 policy-api | [2024-04-25T12:39:39.965+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
12:42:04 mariadb | # you may not use this file except in compliance with the License.
12:42:04 simulator | 2024-04-25 12:39:22,419 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
12:42:04 kafka | broker.rack = null
12:42:04 zookeeper | [2024-04-25 12:39:19,488] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.686738523Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.295096ms
12:42:04 policy-pap | [2024-04-25T12:39:52.207+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final
12:42:04 policy-apex-pdp | receive.buffer.bytes = 65536
12:42:04 policy-api | [2024-04-25T12:39:40.065+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@26844abb
12:42:04 policy-db-migrator | --------------
12:42:04 mariadb | # You may obtain a copy of the License at
12:42:04 simulator | 2024-04-25 12:39:22,420 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
12:42:04 kafka | broker.session.timeout.ms = 9000
12:42:04 zookeeper | [2024-04-25 12:39:19,488] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.690592175Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
12:42:04 policy-pap | [2024-04-25T12:39:52.275+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 40 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
12:42:04 policy-apex-pdp | reconnect.backoff.max.ms = 1000
12:42:04 policy-api | [2024-04-25T12:39:40.066+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
12:42:04 policy-db-migrator |
12:42:04 mariadb | #
12:42:04 simulator | 2024-04-25 12:39:22,448 INFO Session workerName=node0
12:42:04 kafka | client.quota.callback.class = null
12:42:04 zookeeper | [2024-04-25 12:39:19,488] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.693097168Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.504933ms
12:42:04 policy-pap | [2024-04-25T12:39:52.276+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
12:42:04 policy-apex-pdp | reconnect.backoff.ms = 50
12:42:04 policy-api | [2024-04-25T12:39:42.060+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
12:42:04 policy-db-migrator |
12:42:04 mariadb | # http://www.apache.org/licenses/LICENSE-2.0
12:42:04 simulator | 2024-04-25 12:39:22,509 INFO Using GSON for REST calls
12:42:04 kafka | compression.type = producer
12:42:04 zookeeper | [2024-04-25 12:39:19,488] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.697528876Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
12:42:04 policy-pap | [2024-04-25T12:39:54.236+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
12:42:04 policy-apex-pdp | request.timeout.ms = 30000
12:42:04 policy-api | [2024-04-25T12:39:42.063+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
12:42:04 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
12:42:04 mariadb | #
12:42:04 simulator | 2024-04-25 12:39:22,521 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}
12:42:04 kafka | connection.failed.authentication.delay.ms = 100
12:42:04 zookeeper | [2024-04-25 12:39:19,488] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.698246756Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=717.71µs
12:42:04 policy-pap | [2024-04-25T12:39:54.323+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 79 ms. Found 7 JPA repository interfaces.
12:42:04 policy-apex-pdp | retry.backoff.ms = 100
12:42:04 policy-api | [2024-04-25T12:39:43.211+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
12:42:04 policy-db-migrator | --------------
12:42:04 mariadb | # Unless required by applicable law or agreed to in writing, software
12:42:04 simulator | 2024-04-25 12:39:22,526 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
12:42:04 kafka | connections.max.idle.ms = 600000
12:42:04 zookeeper | [2024-04-25 12:39:19,488] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.706751049Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
12:42:04 policy-pap | [2024-04-25T12:39:54.727+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
12:42:04 policy-apex-pdp | sasl.client.callback.handler.class = null
12:42:04 policy-api | [2024-04-25T12:39:44.821+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
12:42:04 mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
12:42:04 simulator | 2024-04-25 12:39:22,526 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @1787ms
12:42:04 kafka | connections.max.reauth.ms = 0
12:42:04 zookeeper | [2024-04-25 12:39:19,488] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.707728091Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=976.832µs
12:42:04 policy-pap | [2024-04-25T12:39:54.728+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
12:42:04 policy-apex-pdp | sasl.jaas.config = null
12:42:04 policy-api | [2024-04-25T12:39:46.004+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
12:42:04 policy-db-migrator | --------------
12:42:04 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12:42:04 simulator | 2024-04-25 12:39:22,526 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4893 ms.
12:42:04 kafka | control.plane.listener.name = null
12:42:04 zookeeper | [2024-04-25 12:39:19,488] INFO (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.716549738Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
12:42:04 policy-pap | [2024-04-25T12:39:55.322+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
12:42:04 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
12:42:04 policy-api | [2024-04-25T12:39:46.235+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@134c329a, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1c277413, org.springframework.security.web.context.SecurityContextHolderFilter@3033e54c, org.springframework.security.web.header.HeaderWriterFilter@7908e69e, org.springframework.security.web.authentication.logout.LogoutFilter@635ad140, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@463bdee9, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@10e5c13c, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@796ed904, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@1e9b1d9f, org.springframework.security.web.access.ExceptionTranslationFilter@6ef0a044, org.springframework.security.web.access.intercept.AuthorizationFilter@631c244c]
12:42:04 policy-db-migrator |
12:42:04 mariadb | # See the License for the specific language governing permissions and
12:42:04 simulator | 2024-04-25 12:39:22,527 INFO org.onap.policy.models.simulators starting VFC simulator
12:42:04 kafka | controlled.shutdown.enable = true
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.717784795Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.234817ms
12:42:04 policy-pap | [2024-04-25T12:39:55.332+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
12:42:04 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
12:42:04 policy-api | [2024-04-25T12:39:47.163+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
12:42:04 policy-db-migrator |
12:42:04 mariadb | # limitations under the License.
12:42:04 simulator | 2024-04-25 12:39:22,529 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
12:42:04 kafka | controlled.shutdown.max.retries = 3
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:host.name=070cd7ecdd50 (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.725493716Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
12:42:04 policy-pap | [2024-04-25T12:39:55.335+00:00|INFO|StandardService|main] Starting service [Tomcat]
12:42:04 policy-apex-pdp | sasl.kerberos.service.name = null
12:42:04 policy-api | [2024-04-25T12:39:47.275+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
12:42:04 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
12:42:04 mariadb |
12:42:04 simulator | 2024-04-25 12:39:22,529 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
12:42:04 kafka | controlled.shutdown.retry.backoff.ms = 5000
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.726755523Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.261247ms
12:42:04 policy-pap | [2024-04-25T12:39:55.335+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
12:42:04 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
12:42:04 policy-api | [2024-04-25T12:39:47.294+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
12:42:04 policy-db-migrator | --------------
12:42:04 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
12:42:04 simulator | 2024-04-25 12:39:22,531 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
12:42:04 kafka | controller.listener.names = null
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.734258012Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
12:42:04 policy-pap | [2024-04-25T12:39:55.427+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
12:42:04 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
12:42:04 policy-api | [2024-04-25T12:39:47.312+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 12.187 seconds (process running for 12.808)
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
12:42:04 mariadb | do
12:42:04 simulator | 2024-04-25 12:39:22,532 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
12:42:04 kafka | controller.quorum.append.linger.ms = 25
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.735658551Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.395599ms
12:42:04 policy-pap | [2024-04-25T12:39:55.427+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3071 ms
12:42:04 policy-apex-pdp | sasl.login.callback.handler.class = null
12:42:04 policy-api | [2024-04-25T12:40:06.944+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
12:42:04 policy-db-migrator | --------------
12:42:04 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
12:42:04 simulator | 2024-04-25 12:39:22,537 INFO Session workerName=node0
12:42:04 kafka | controller.quorum.election.backoff.max.ms = 1000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.827475686Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
12:42:04 policy-pap | [2024-04-25T12:39:55.844+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 policy-apex-pdp | sasl.login.class = null
12:42:04 policy-api | [2024-04-25T12:40:06.944+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
12:42:04 policy-db-migrator |
12:42:04 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
12:42:04 simulator | 2024-04-25 12:39:22,597 INFO Using GSON for REST calls
12:42:04 kafka | controller.quorum.election.timeout.ms = 1000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.828433688Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=959.842µs
12:42:04 policy-pap | [2024-04-25T12:39:55.903+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 policy-apex-pdp | sasl.login.connect.timeout.ms = null
12:42:04 policy-api | [2024-04-25T12:40:06.946+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
12:42:04 policy-db-migrator |
12:42:04 mariadb | done
12:42:04 simulator | 2024-04-25 12:39:22,608 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}
12:42:04 kafka | controller.quorum.fetch.timeout.ms = 2000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.838505102Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
12:42:04 policy-pap | [2024-04-25T12:39:56.235+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 policy-apex-pdp | sasl.login.read.timeout.ms = null
12:42:04 policy-api | [2024-04-25T12:40:07.290+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers:
12:42:04 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
12:42:04 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
12:42:04 simulator | 2024-04-25 12:39:22,612 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
12:42:04 kafka | controller.quorum.request.timeout.ms = 2000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.838613643Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=109.171µs
12:42:04 policy-pap | [2024-04-25T12:39:56.329+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4ee5b2d9
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
12:42:04 policy-api | []
12:42:04 policy-db-migrator | --------------
12:42:04 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
12:42:04 simulator | 2024-04-25 12:39:22,613 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @1873ms
12:42:04 kafka | controller.quorum.retry.backoff.ms = 20
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.843785802Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
12:42:04 policy-pap | [2024-04-25T12:39:56.331+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
12:42:04 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
12:42:04 kafka | controller.quorum.voters = []
12:42:04 simulator | 2024-04-25 12:39:22,613 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4917 ms.
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.843822742Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=38.33µs
12:42:04 policy-pap | [2024-04-25T12:39:56.362+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
12:42:04 policy-db-migrator | --------------
12:42:04 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
12:42:04 kafka | controller.quota.window.num = 11
12:42:04 simulator | 2024-04-25 12:39:22,613 INFO org.onap.policy.models.simulators started
12:42:04 policy-pap | [2024-04-25T12:39:57.789+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.847805565Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
12:42:04 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
12:42:04 policy-db-migrator | 
12:42:04 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
12:42:04 kafka | controller.quota.window.size.seconds = 1
12:42:04 policy-pap | [2024-04-25T12:39:57.798+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.852290214Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.476259ms
12:42:04 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
12:42:04 policy-db-migrator | 
12:42:04 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
12:42:04 kafka | controller.socket.timeout.ms = 30000
12:42:04 kafka | create.topic.policy.class.name = null
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.855634538Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
12:42:04 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
12:42:04 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
12:42:04 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
12:42:04 policy-pap | [2024-04-25T12:39:58.245+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
12:42:04 kafka | default.replication.factor = 1
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.858427345Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.792097ms
12:42:04 policy-apex-pdp | sasl.mechanism = GSSAPI
12:42:04 policy-db-migrator | --------------
12:42:04 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
12:42:04 policy-pap | [2024-04-25T12:39:58.642+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
12:42:04 kafka | delegation.token.expiry.check.interval.ms = 3600000
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.864932221Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
12:42:04 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
12:42:04 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
12:42:04 policy-pap | [2024-04-25T12:39:58.770+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
12:42:04 kafka | delegation.token.expiry.time.ms = 86400000
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.864994022Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=62.391µs
12:42:04 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
12:42:04 policy-db-migrator | --------------
12:42:04 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
12:42:04 policy-pap | [2024-04-25T12:39:59.037+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
12:42:04 kafka | delegation.token.master.key = null
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.8716977Z level=info msg="Executing migration" id="create quota table v1"
12:42:04 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
12:42:04 policy-db-migrator | 
12:42:04 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
12:42:04 policy-pap | allow.auto.create.topics = true
12:42:04 kafka | delegation.token.max.lifetime.ms = 604800000
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.872804645Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.110195ms
12:42:04 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
12:42:04 policy-db-migrator | 
12:42:04 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
12:42:04 policy-pap | auto.commit.interval.ms = 5000
12:42:04 kafka | delegation.token.secret.key = null
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.879842288Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
12:42:04 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
12:42:04 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
12:42:04 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
12:42:04 policy-pap | auto.include.jmx.reporter = true
12:42:04 kafka | delete.records.purgatory.purge.interval.requests = 1
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.881128566Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.294288ms
12:42:04 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
12:42:04 policy-db-migrator | --------------
12:42:04 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
12:42:04 policy-pap | auto.offset.reset = latest
12:42:04 kafka | delete.topic.enable = true
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.89202883Z level=info msg="Executing migration" id="Update quota table charset"
12:42:04 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
12:42:04 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
12:42:04 policy-pap | bootstrap.servers = [kafka:9092]
12:42:04 kafka | early.start.listeners = null
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:24.89206992Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=42.59µs
12:42:04 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
12:42:04 policy-db-migrator | --------------
12:42:04 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
12:42:04 policy-pap | check.crcs = true
12:42:04 kafka | fetch.max.bytes = 57671680
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.029255745Z level=info msg="Executing migration" id="create plugin_setting table"
12:42:04 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
12:42:04 policy-db-migrator | 
12:42:04 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
12:42:04 policy-pap | client.dns.lookup = use_all_dns_ips
12:42:04 kafka | fetch.purgatory.purge.interval.requests = 1000
12:42:04 zookeeper | [2024-04-25 12:39:19,490] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.030622593Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.372628ms
12:42:04 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
12:42:04 policy-db-migrator | 
12:42:04 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
12:42:04 policy-pap | client.id = consumer-53d3b957-3026-4843-bc4f-55d426241089-1
12:42:04 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor]
12:42:04 zookeeper | [2024-04-25 12:39:19,492] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.03575068Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
12:42:04 policy-apex-pdp | security.protocol = PLAINTEXT
12:42:04 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
12:42:04 mariadb | 
12:42:04 policy-pap | client.rack = 
12:42:04 kafka | group.consumer.heartbeat.interval.ms = 5000
12:42:04 zookeeper | [2024-04-25 12:39:19,492] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.036610101Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=859.321µs
12:42:04 policy-apex-pdp | security.providers = null
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | connections.max.idle.ms = 540000
12:42:04 kafka | group.consumer.max.heartbeat.interval.ms = 15000
12:42:04 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
12:42:04 zookeeper | [2024-04-25 12:39:19,493] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.041365355Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
12:42:04 policy-apex-pdp | send.buffer.bytes = 131072
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
12:42:04 policy-pap | default.api.timeout.ms = 60000
12:42:04 kafka | group.consumer.max.session.timeout.ms = 60000
12:42:04 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
12:42:04 zookeeper | [2024-04-25 12:39:19,493] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.046062357Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.695532ms
12:42:04 policy-apex-pdp | session.timeout.ms = 45000
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | enable.auto.commit = true
12:42:04 kafka | group.consumer.max.size = 2147483647
12:42:04 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
12:42:04 zookeeper | [2024-04-25 12:39:19,493] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
12:42:04 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | exclude.internal.topics = true
12:42:04 kafka | group.consumer.min.heartbeat.interval.ms = 5000
12:42:04 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.054004052Z level=info msg="Executing migration" id="Update plugin_setting table charset"
12:42:04 zookeeper | [2024-04-25 12:39:19,494] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | fetch.max.bytes = 52428800
12:42:04 kafka | group.consumer.min.session.timeout.ms = 45000
12:42:04 mariadb | 
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.054025033Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=22.541µs
12:42:04 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
12:42:04 kafka | group.consumer.session.timeout.ms = 45000
12:42:04 mariadb | 2024-04-25 12:39:21+00:00 [Note] [Entrypoint]: Stopping temporary server
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.059661577Z level=info msg="Executing migration" id="create session table"
12:42:04 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
12:42:04 zookeeper | [2024-04-25 12:39:19,495] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
12:42:04 policy-pap | fetch.max.wait.ms = 500
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | group.coordinator.new.enable = false
12:42:04 mariadb | 2024-04-25 12:39:21 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.060969744Z level=info msg="Migration successfully executed" id="create session table" duration=1.308537ms
12:42:04 policy-apex-pdp | ssl.cipher.suites = null
12:42:04 zookeeper | [2024-04-25 12:39:19,495] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
12:42:04 zookeeper | [2024-04-25 12:39:19,495] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
12:42:04 kafka | group.coordinator.threads = 1
12:42:04 mariadb | 2024-04-25 12:39:21 0 [Note] InnoDB: FTS optimize thread exiting.
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.067085594Z level=info msg="Executing migration" id="Drop old table playlist table"
12:42:04 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
12:42:04 policy-pap | fetch.min.bytes = 1
12:42:04 policy-db-migrator | --------------
12:42:04 zookeeper | [2024-04-25 12:39:19,495] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
12:42:04 kafka | group.initial.rebalance.delay.ms = 3000
12:42:04 mariadb | 2024-04-25 12:39:21 0 [Note] InnoDB: Starting shutdown...
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.067209616Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=124.712µs
12:42:04 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
12:42:04 policy-pap | group.id = 53d3b957-3026-4843-bc4f-55d426241089
12:42:04 policy-db-migrator | 
12:42:04 zookeeper | [2024-04-25 12:39:19,495] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
12:42:04 kafka | group.max.session.timeout.ms = 1800000
12:42:04 mariadb | 2024-04-25 12:39:21 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.078438844Z level=info msg="Executing migration" id="Drop old table playlist_item table"
12:42:04 policy-apex-pdp | ssl.engine.factory.class = null
12:42:04 policy-pap | group.instance.id = null
12:42:04 policy-db-migrator | 
12:42:04 zookeeper | [2024-04-25 12:39:19,498] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 kafka | group.max.size = 2147483647
12:42:04 mariadb | 2024-04-25 12:39:21 0 [Note] InnoDB: Buffer pool(s) dump completed at 240425 12:39:21
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.078561066Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=124.812µs
12:42:04 policy-apex-pdp | ssl.key.password = null
12:42:04 policy-pap | heartbeat.interval.ms = 3000
12:42:04 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
12:42:04 zookeeper | [2024-04-25 12:39:19,498] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 kafka | group.min.session.timeout.ms = 6000
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.084269861Z level=info msg="Executing migration" id="create playlist table v2"
12:42:04 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
12:42:04 policy-pap | interceptor.classes = []
12:42:04 policy-db-migrator | --------------
12:42:04 zookeeper | [2024-04-25 12:39:19,498] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
12:42:04 kafka | initial.broker.registration.timeout.ms = 60000
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Shutdown completed; log sequence number 381915; transaction id 298
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.085374077Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.104156ms
12:42:04 policy-apex-pdp | ssl.keystore.certificate.chain = null
12:42:04 policy-pap | internal.leave.group.on.close = true
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
12:42:04 zookeeper | [2024-04-25 12:39:19,499] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
12:42:04 kafka | inter.broker.listener.name = PLAINTEXT
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] mariadbd: Shutdown complete
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.089850755Z level=info msg="Executing migration" id="create playlist item table v2"
12:42:04 policy-apex-pdp | ssl.keystore.key = null
12:42:04 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
12:42:04 policy-db-migrator | --------------
12:42:04 zookeeper | [2024-04-25 12:39:19,499] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 kafka | inter.broker.protocol.version = 3.6-IV2
12:42:04 mariadb | 
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.09094730Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.096215ms
12:42:04 policy-apex-pdp | ssl.keystore.location = null
12:42:04 policy-pap | isolation.level = read_uncommitted
12:42:04 policy-db-migrator | 
12:42:04 zookeeper | [2024-04-25 12:39:19,521] INFO Logging initialized @590ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
12:42:04 kafka | kafka.metrics.polling.interval.secs = 10
12:42:04 mariadb | 2024-04-25 12:39:22+00:00 [Note] [Entrypoint]: Temporary server stopped
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.106289032Z level=info msg="Executing migration" id="Update playlist table charset"
12:42:04 policy-apex-pdp | ssl.keystore.password = null
12:42:04 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
12:42:04 policy-db-migrator | 
12:42:04 zookeeper | [2024-04-25 12:39:19,621] WARN o.e.j.s.ServletContextHandler@6d5620ce{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
12:42:04 kafka | kafka.metrics.reporters = []
12:42:04 mariadb | 
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.106493965Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=207.933µs
12:42:04 policy-apex-pdp | ssl.keystore.type = JKS
12:42:04 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
12:42:04 kafka | leader.imbalance.check.interval.seconds = 300
12:42:04 mariadb | 2024-04-25 12:39:22+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
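The mariadb entries above trace the whole database-provisioning step of the container entrypoint: a loop over the six policy databases issuing CREATE DATABASE and GRANT for ${MYSQL_USER}, a FLUSH PRIVILEGES, and a final load of the policy-clamp schema. A minimal sketch of that init script, reconstructed only from the commands echoed in the log (the script path and the explicit 'set -x' are assumptions, not shown in this build output):

    #!/bin/bash
    # Sketch of the DB-init loop traced by the "mariadb |" entries above.
    # Assumes MYSQL_ROOT_PASSWORD, MYSQL_USER and MYSQL_PASSWORD are set as in the log;
    # the '+'-prefixed log lines suggest xtrace, hence 'set -x' here (an assumption).
    set -x
    for db in migration pooling policyadmin operationshistory clampacm policyclamp
    do
        mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
        mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
    done
    mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
    # Load the policy-clamp schema as the unprivileged user, matching the log's final step.
    mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql

After this completes, the temporary server is stopped and mariadbd restarts as process 1, which is exactly the "Stopping temporary server" / "MariaDB init process done" sequence logged above.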
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.122891032Z level=info msg="Executing migration" id="Update playlist_item table charset"
12:42:04 policy-apex-pdp | ssl.protocol = TLSv1.3
12:42:04 policy-pap | max.partition.fetch.bytes = 1048576
12:42:04 zookeeper | [2024-04-25 12:39:19,622] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
12:42:04 kafka | leader.imbalance.per.broker.percentage = 10
12:42:04 mariadb | 
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.123013334Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=125.182µs
12:42:04 policy-apex-pdp | ssl.provider = null
12:42:04 policy-pap | max.poll.interval.ms = 300000
12:42:04 policy-db-migrator | --------------
12:42:04 zookeeper | [2024-04-25 12:39:19,645] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server)
12:42:04 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.127354361Z level=info msg="Executing migration" id="Add playlist column created_at"
12:42:04 policy-apex-pdp | ssl.secure.random.implementation = null
12:42:04 policy-pap | max.poll.records = 500
12:42:04 policy-pap | metadata.max.age.ms = 300000
12:42:04 zookeeper | [2024-04-25 12:39:19,675] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
12:42:04 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.130680345Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.326384ms
12:42:04 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
12:42:04 policy-pap | metric.reporters = []
12:42:04 policy-pap | metrics.num.samples = 2
12:42:04 zookeeper | [2024-04-25 12:39:19,675] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
12:42:04 kafka | log.cleaner.backoff.ms = 15000
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Number of transaction pools: 1
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.135096763Z level=info msg="Executing migration" id="Add playlist column updated_at"
12:42:04 policy-apex-pdp | ssl.truststore.certificates = null
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
12:42:04 policy-pap | metrics.recording.level = INFO
12:42:04 zookeeper | [2024-04-25 12:39:19,676] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
12:42:04 kafka | log.cleaner.dedupe.buffer.size = 134217728
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.138426448Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.329255ms
12:42:04 policy-apex-pdp | ssl.truststore.location = null
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | metrics.sample.window.ms = 30000
12:42:04 zookeeper | [2024-04-25 12:39:19,680] WARN ServletContext@o.e.j.s.ServletContextHandler@6d5620ce{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
12:42:04 kafka | log.cleaner.delete.retention.ms = 86400000
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.144807672Z level=info msg="Executing migration" id="drop preferences table v2"
12:42:04 policy-apex-pdp | ssl.truststore.password = null
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
12:42:04 zookeeper | [2024-04-25 12:39:19,688] INFO Started o.e.j.s.ServletContextHandler@6d5620ce{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
12:42:04 kafka | log.cleaner.enable = true
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.144977744Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=169.772µs
12:42:04 policy-apex-pdp | ssl.truststore.type = JKS
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | receive.buffer.bytes = 65536
12:42:04 zookeeper | [2024-04-25 12:39:19,702] INFO Started ServerConnector@4d1bf319{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
12:42:04 kafka | log.cleaner.io.buffer.load.factor = 0.9
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.15453190Z level=info msg="Executing migration" id="drop preferences table v3"
12:42:04 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
12:42:04 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
12:42:04 policy-pap | reconnect.backoff.max.ms = 1000
12:42:04 zookeeper | [2024-04-25 12:39:19,703] INFO Started @772ms (org.eclipse.jetty.server.Server)
12:42:04 kafka | log.cleaner.io.buffer.size = 524288
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.154784044Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=252.354µs
12:42:04 policy-apex-pdp | 
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | reconnect.backoff.ms = 50
12:42:04 zookeeper | [2024-04-25 12:39:19,703] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
12:42:04 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Completed initialization of buffer pool
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.15904Z level=info msg="Executing migration" id="create preferences table v3"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.756+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
12:42:04 policy-pap | request.timeout.ms = 30000
12:42:04 zookeeper | [2024-04-25 12:39:19,708] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
12:42:04 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.159941081Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=901.161µs
12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.756+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | retry.backoff.ms = 100
12:42:04 zookeeper | [2024-04-25 12:39:19,709] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
12:42:04 kafka | log.cleaner.min.cleanable.ratio = 0.5
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: 128 rollback segments are active.
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.163486239Z level=info msg="Executing migration" id="Update preferences table charset"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.756+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048802755
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | sasl.client.callback.handler.class = null
12:42:04 zookeeper | [2024-04-25 12:39:19,711] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
12:42:04 kafka | log.cleaner.min.compaction.lag.ms = 0
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.16355210Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=66.051µs
12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.758+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-1, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Subscribed to topic(s): policy-pdp-pap
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | sasl.jaas.config = null
12:42:04 zookeeper | [2024-04-25 12:39:19,713] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
12:42:04 kafka | log.cleaner.threads = 1
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.169811022Z level=info msg="Executing migration" id="Add column team_id in preferences"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.769+00:00|INFO|ServiceManager|main] service manager starting
12:42:04 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
12:42:04 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
12:42:04 zookeeper | [2024-04-25 12:39:19,727] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
12:42:04 kafka | log.cleanup.policy = [delete]
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: log sequence number 381915; transaction id 299
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.175083812Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.27232ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.769+00:00|INFO|ServiceManager|main] service manager starting topics
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
12:42:04 zookeeper | [2024-04-25 12:39:19,727] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
12:42:04 kafka | log.dir = /tmp/kafka-logs
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] Plugin 'FEEDBACK' is disabled.
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.183838508Z level=info msg="Executing migration" id="Update team_id column values in preferences"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.770+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4b79aeb3-604a-4e33-80d9-cdeedf19ce63, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
12:42:04 policy-pap | sasl.kerberos.service.name = null
12:42:04 zookeeper | [2024-04-25 12:39:19,729] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
12:42:04 kafka | log.dirs = /var/lib/kafka/data
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.184092541Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=253.393µs
12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.789+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
12:42:04 zookeeper | [2024-04-25 12:39:19,729] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
12:42:04 kafka | log.flush.interval.messages = 9223372036854775807
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.194551769Z level=info msg="Executing migration" id="Add column week_start in preferences"
12:42:04 policy-apex-pdp | allow.auto.create.topics = true
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
12:42:04 zookeeper | [2024-04-25 12:39:19,733] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
12:42:04 kafka | log.flush.interval.ms = null
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.19986335Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=5.311061ms
12:42:04 policy-apex-pdp | auto.commit.interval.ms = 5000
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | sasl.login.callback.handler.class = null
12:42:04 zookeeper | [2024-04-25 12:39:19,733] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
12:42:04 kafka | log.flush.offset.checkpoint.interval.ms = 60000
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] Server socket created on IP: '0.0.0.0'.
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.203631089Z level=info msg="Executing migration" id="Add column preferences.json_data"
12:42:04 policy-apex-pdp | auto.include.jmx.reporter = true
12:42:04 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | sasl.login.class = null
12:42:04 zookeeper | [2024-04-25 12:39:19,736] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
12:42:04 kafka | log.flush.scheduler.interval.ms = 9223372036854775807
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] Server socket created on IP: '::'.
12:42:04 policy-apex-pdp | auto.offset.reset = latest
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.206973343Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.341204ms
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
12:42:04 policy-pap | sasl.login.connect.timeout.ms = null
12:42:04 zookeeper | [2024-04-25 12:39:19,737] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
12:42:04 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] mariadbd: ready for connections.
12:42:04 policy-apex-pdp | bootstrap.servers = [kafka:9092]
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.216383938Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | sasl.login.read.timeout.ms = null
12:42:04 kafka | log.index.interval.bytes = 4096
12:42:04 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
12:42:04 policy-apex-pdp | check.crcs = true
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.216574571Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=189.913µs
12:42:04 policy-db-migrator | 
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | sasl.login.refresh.buffer.seconds = 300
12:42:04 kafka | log.index.size.max.bytes = 10485760
12:42:04 mariadb | 2024-04-25 12:39:22 0 [Note] InnoDB: Buffer pool(s) load completed at 240425 12:39:22
12:42:04 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.221472885Z level=info msg="Executing migration" id="Add preferences index org_id"
12:42:04 zookeeper | [2024-04-25 12:39:19,737] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
12:42:04 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
12:42:04 policy-pap | sasl.login.refresh.min.period.seconds = 60
12:42:04 kafka | log.local.retention.bytes = -2
12:42:04 mariadb | 2024-04-25 12:39:22 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication)
12:42:04 policy-apex-pdp | client.id = consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.223332819Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.858815ms
12:42:04 zookeeper | [2024-04-25 12:39:19,745] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | sasl.login.refresh.window.factor = 0.8
12:42:04 kafka | log.local.retention.ms = -2
12:42:04 mariadb | 2024-04-25 12:39:22 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication)
12:42:04 policy-apex-pdp | client.rack = 
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.229032705Z level=info msg="Executing migration" id="Add preferences index user_id"
12:42:04 zookeeper | [2024-04-25 12:39:19,745] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
12:42:04 policy-pap | sasl.login.refresh.window.jitter = 0.05
12:42:04 policy-pap | sasl.login.retry.backoff.max.ms = 10000
12:42:04 kafka | log.message.downconversion.enable = true
12:42:04 mariadb | 2024-04-25 12:39:22 8 [Warning] Aborted connection 8 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.230019528Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=986.293µs
12:42:04 zookeeper | [2024-04-25 12:39:19,758] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
12:42:04 policy-apex-pdp | connections.max.idle.ms = 540000
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | sasl.login.retry.backoff.ms = 100
12:42:04 kafka | log.message.format.version = 3.0-IV1
12:42:04 mariadb | 2024-04-25 12:39:22 14 [Warning] Aborted connection 14 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.237086862Z level=info msg="Executing migration" id="create alert table v1"
12:42:04 zookeeper | [2024-04-25 12:39:19,759] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
12:42:04 policy-apex-pdp | default.api.timeout.ms = 60000
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | sasl.mechanism = GSSAPI
12:42:04 kafka | log.message.timestamp.after.max.ms = 9223372036854775807
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.238957117Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.869416ms
12:42:04 zookeeper | [2024-04-25 12:39:21,930] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
12:42:04 policy-apex-pdp | enable.auto.commit = true
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
12:42:04 kafka | log.message.timestamp.before.max.ms = 9223372036854775807
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.24528030Z level=info msg="Executing migration" id="add index alert org_id & id "
12:42:04 policy-apex-pdp | exclude.internal.topics = true
12:42:04 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
12:42:04 policy-pap | sasl.oauthbearer.expected.audience = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.246318263Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.037523ms
12:42:04 policy-apex-pdp | fetch.max.bytes = 52428800
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
12:42:04 policy-pap | sasl.oauthbearer.expected.issuer = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.256421987Z level=info msg="Executing migration" id="add index alert state"
12:42:04 policy-apex-pdp | fetch.max.wait.ms = 500
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
12:42:04 kafka | log.message.timestamp.type = CreateTime
12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
12:42:04 policy-apex-pdp | fetch.min.bytes = 1
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | log.preallocate = false
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.257867546Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.445279ms
12:42:04 policy-apex-pdp | group.id = 4b79aeb3-604a-4e33-80d9-cdeedf19ce63
12:42:04 policy-db-migrator | 
12:42:04 kafka | log.retention.bytes = -1
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.26197770Z level=info msg="Executing migration" id="add index alert dashboard_id"
12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
12:42:04 policy-apex-pdp | group.instance.id = null
12:42:04 policy-db-migrator | 
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.263262217Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.291247ms
12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
12:42:04 policy-apex-pdp | heartbeat.interval.ms = 3000
12:42:04 kafka | log.retention.check.interval.ms = 300000
12:42:04 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
12:42:04 policy-apex-pdp | interceptor.classes = []
12:42:04 kafka | log.retention.hours = 168
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.272167045Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | sasl.oauthbearer.scope.claim.name = scope
12:42:04 policy-apex-pdp | internal.leave.group.on.close = true
12:42:04 kafka | log.retention.minutes = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.273414432Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.246517ms
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
12:42:04 policy-pap | sasl.oauthbearer.sub.claim.name = sub
12:42:04 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
12:42:04 kafka | log.retention.ms = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.279298359Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | sasl.oauthbearer.token.endpoint.url = null
12:42:04 kafka | log.roll.hours = 168
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.280953232Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.653983ms
12:42:04 policy-db-migrator | 
12:42:04 policy-apex-pdp | isolation.level = read_uncommitted
12:42:04 kafka | log.roll.jitter.hours = 0
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.286534325Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | security.protocol = PLAINTEXT
12:42:04 kafka | log.roll.jitter.ms = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.288015754Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.481979ms
12:42:04 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
12:42:04 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
12:42:04 policy-pap | security.providers = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.2929523Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | max.partition.fetch.bytes = 1048576
12:42:04 kafka | log.roll.ms = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.303614361Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.661291ms
12:42:04 kafka | log.segment.bytes = 1073741824
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
12:42:04 policy-pap | send.buffer.bytes = 131072
12:42:04 policy-apex-pdp | max.poll.interval.ms = 300000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.307270099Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
12:42:04 kafka | log.segment.delete.delay.ms = 60000
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | session.timeout.ms = 45000
12:42:04 policy-apex-pdp | max.poll.records = 500
12:42:04 policy-apex-pdp | metadata.max.age.ms = 300000
12:42:04 kafka | max.connection.creation.rate = 2147483647
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | socket.connection.setup.timeout.max.ms = 30000
12:42:04 policy-apex-pdp | metric.reporters = []
12:42:04 policy-apex-pdp | metrics.num.samples = 2
12:42:04 kafka | max.connections = 2147483647
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | socket.connection.setup.timeout.ms = 10000
12:42:04 policy-apex-pdp | metrics.recording.level = INFO
12:42:04 policy-apex-pdp | metrics.sample.window.ms = 30000
12:42:04 kafka | max.connections.per.ip = 2147483647
12:42:04 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
12:42:04 policy-pap | ssl.cipher.suites = null
12:42:04 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
12:42:04 policy-apex-pdp | receive.buffer.bytes = 65536
12:42:04 kafka | max.connections.per.ip.overrides = 
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.307842247Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=572.177µs
12:42:04 policy-apex-pdp | reconnect.backoff.max.ms = 1000
12:42:04 kafka | max.incremental.fetch.session.cache.slots = 1000
12:42:04 policy-pap | ssl.endpoint.identification.algorithm = https
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.314458014Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
12:42:04 policy-apex-pdp | reconnect.backoff.ms = 50
12:42:04 kafka | message.max.bytes = 1048588
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
12:42:04 policy-apex-pdp | request.timeout.ms = 30000
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | ssl.engine.factory.class = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.316005184Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.54653ms
12:42:04 kafka | metadata.log.dir = null
12:42:04 policy-apex-pdp | retry.backoff.ms = 100
12:42:04 policy-pap | ssl.key.password = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.320391163Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
12:42:04 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
12:42:04 policy-apex-pdp | sasl.client.callback.handler.class = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.321112142Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=720.059µs
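The policy-db-migrator entries interleaved above all follow one fixed pattern: announce "> upgrade NNNN-<table>.sql", print a "--------------" delimiter, echo the script's CREATE TABLE IF NOT EXISTS statement, print the delimiter again, and move on to the next numbered file (0140, 0150, ... 0330 so far). A sketch of a runner that would produce exactly this trace, assuming a hypothetical script directory and connection flags (the real migrator's layout and credentials are not shown in this log):

    #!/bin/bash
    # Sketch of the ordered-upgrade loop implied by the policy-db-migrator output.
    # SQL_DIR and the mysql credentials are assumptions; the numbered-file naming
    # and the "--------------" delimiter lines are taken from the log itself.
    SQL_DIR=/opt/app/policy/sql
    for script in $(ls "${SQL_DIR}"/*.sql | sort)   # lexical sort keeps 0140 < 0150 < ...
    do
        echo "> upgrade $(basename "${script}")"
        echo "--------------"
        cat "${script}"                              # echo the DDL about to be applied
        echo "--------------"
        mysql -upolicy_user -ppolicy_user policyadmin < "${script}"
    done

Because every script is a CREATE TABLE IF NOT EXISTS, rerunning the loop against an already-migrated policyadmin database is harmless, which fits the idempotent style of the statements logged above.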
12:42:04 kafka | metadata.log.max.snapshot.interval.ms = 3600000
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | ssl.keymanager.algorithm = SunX509
12:42:04 policy-apex-pdp | sasl.jaas.config = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.324949662Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
12:42:04 kafka | metadata.log.segment.bytes = 1073741824
12:42:04 policy-db-migrator | 
12:42:04 policy-pap | ssl.keystore.certificate.chain = null
12:42:04 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.32557583Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=625.438µs
12:42:04 kafka | metadata.log.segment.min.bytes = 8388608
12:42:04 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
12:42:04 policy-pap | ssl.keystore.key = null
12:42:04 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.332953328Z level=info msg="Executing migration" id="create alert_notification table v1"
12:42:04 kafka | metadata.log.segment.ms = 604800000
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | ssl.keystore.location = null
12:42:04 policy-apex-pdp | sasl.kerberos.service.name = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.334374398Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.42077ms
12:42:04 kafka | metadata.max.idle.interval.ms = 500
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
12:42:04 policy-pap | ssl.keystore.password = null
12:42:04 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.386527316Z level=info msg="Executing migration" id="Add column is_default"
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.39288558Z level=info msg="Migration successfully executed" id="Add column is_default" duration=6.359514ms
12:42:04 kafka | metadata.max.retention.bytes = 104857600
12:42:04 policy-pap | ssl.keystore.type = JKS
12:42:04 policy-db-migrator | 
12:42:04 policy-apex-pdp | sasl.login.callback.handler.class = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.402957763Z level=info msg="Executing migration" id="Add column frequency"
12:42:04 kafka | metadata.max.retention.ms = 604800000
12:42:04 policy-pap | ssl.protocol = TLSv1.3
12:42:04 policy-db-migrator | 
12:42:04 policy-apex-pdp | sasl.login.class = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.4095847Z level=info msg="Migration successfully executed" id="Add column frequency" duration=6.625787ms
12:42:04 kafka | metric.reporters = []
12:42:04 policy-pap | ssl.provider = null
12:42:04 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
12:42:04 policy-apex-pdp | sasl.login.connect.timeout.ms = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.517540108Z level=info msg="Executing migration" id="Add column send_reminder"
12:42:04 kafka | metrics.num.samples = 2
12:42:04 policy-pap | ssl.secure.random.implementation = null
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | sasl.login.read.timeout.ms = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.524852985Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=7.313617ms
12:42:04 kafka | metrics.recording.level = INFO
12:42:04 policy-pap | ssl.trustmanager.algorithm = PKIX
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
12:42:04 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.53431820Z level=info msg="Executing migration" id="Add column disable_resolve_message"
12:42:04 kafka | metrics.sample.window.ms = 30000
12:42:04 policy-pap | ssl.truststore.certificates = null
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.539939164Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=5.620014ms
12:42:04 kafka | min.insync.replicas = 1
12:42:04 policy-pap | ssl.truststore.location = null
12:42:04 policy-db-migrator | 
12:42:04 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.54948160Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
12:42:04 kafka | node.id = 1
12:42:04 policy-pap | ssl.truststore.password = null
12:42:04 policy-db-migrator | 
12:42:04 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.551321984Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.838444ms
12:42:04 kafka | num.io.threads = 8
12:42:04 policy-pap | ssl.truststore.type = JKS
12:42:04 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
12:42:04 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.558390938Z level=info msg="Executing migration" id="Update alert table charset"
12:42:04 kafka | num.network.threads = 3
12:42:04 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.55858250Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=192.272µs
12:42:04 kafka | num.partitions = 1
12:42:04 policy-pap | 
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.568032505Z level=info msg="Executing migration" id="Update alert_notification table charset"
12:42:04 kafka | num.recovery.threads.per.data.dir = 1
12:42:04 policy-pap | [2024-04-25T12:39:59.207+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
12:42:04 policy-apex-pdp | sasl.mechanism = GSSAPI
12:42:04 policy-db-migrator | --------------
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.568085546Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=58.141µs
12:42:04 kafka | num.replica.alter.log.dirs.threads = null
12:42:04 policy-pap | [2024-04-25T12:39:59.207+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
12:42:04 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
12:42:04 policy-db-migrator | 
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.571754374Z level=info msg="Executing migration" id="create notification_journal table v1"
12:42:04 kafka | num.replica.fetchers = 1
12:42:04 policy-pap | [2024-04-25T12:39:59.207+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048799206
12:42:04 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
12:42:04 policy-db-migrator | 
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.572738378Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=980.414µs
12:42:04 kafka | offset.metadata.max.bytes = 4096
12:42:04 policy-pap | [2024-04-25T12:39:59.210+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-1, groupId=53d3b957-3026-4843-bc4f-55d426241089] Subscribed to topic(s): policy-pdp-pap
12:42:04 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
12:42:04 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.582972302Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
12:42:04 kafka | offsets.commit.required.acks = -1
12:42:04 policy-pap | [2024-04-25T12:39:59.211+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
12:42:04 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
12:42:04 policy-db-migrator | --------------
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.584451192Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.47802ms
12:42:04 kafka | offsets.commit.timeout.ms = 5000
12:42:04 policy-pap | allow.auto.create.topics = true
12:42:04 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
12:42:04 kafka | offsets.load.buffer.size = 5242880
12:42:04 policy-pap | auto.commit.interval.ms = 5000
12:42:04 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
12:42:04 policy-db-migrator | --------------
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.597286391Z level=info msg="Executing migration" id="drop alert_notification_journal"
12:42:04 kafka | offsets.retention.check.interval.ms = 600000
12:42:04 policy-pap | auto.include.jmx.reporter = true
12:42:04 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
12:42:04 policy-db-migrator | 
12:42:04 kafka | offsets.retention.minutes = 10080
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.598471288Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.182136ms
12:42:04 policy-pap | auto.offset.reset = latest
12:42:04 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
12:42:04 policy-db-migrator | 
12:42:04 kafka | offsets.topic.compression.codec = 0
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.605777195Z level=info msg="Executing migration" id="create alert_notification_state table v1"
12:42:04 policy-pap | bootstrap.servers = [kafka:9092]
12:42:04 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
12:42:04 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
12:42:04 kafka | offsets.topic.num.partitions = 50
12:42:04 grafana | logger=migrator
t=2024-04-25T12:39:25.607344865Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.56678ms 12:42:04 policy-pap | check.crcs = true 12:42:04 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | offsets.topic.replication.factor = 1 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.615678385Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 12:42:04 policy-pap | client.dns.lookup = use_all_dns_ips 12:42:04 policy-apex-pdp | security.protocol = PLAINTEXT 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 12:42:04 kafka | offsets.topic.segment.bytes = 104857600 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.617083224Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.404309ms 12:42:04 policy-pap | client.id = consumer-policy-pap-2 12:42:04 policy-apex-pdp | security.providers = null 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.621210988Z level=info msg="Executing migration" id="Add for to alert table" 12:42:04 policy-pap | client.rack = 12:42:04 policy-apex-pdp | send.buffer.bytes = 131072 12:42:04 kafka | password.encoder.iterations = 4096 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.626068132Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.857894ms 12:42:04 policy-pap | connections.max.idle.ms = 540000 12:42:04 policy-apex-pdp | session.timeout.ms = 45000 12:42:04 policy-db-migrator | 12:42:04 kafka | password.encoder.key.length = 128 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.634257271Z level=info msg="Executing migration" id="Add column uid in alert_notification" 12:42:04 policy-pap | default.api.timeout.ms = 60000 12:42:04 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 12:42:04 policy-db-migrator | 12:42:04 kafka | password.encoder.keyfactory.algorithm = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.63798223Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.727429ms 12:42:04 policy-pap | enable.auto.commit = true 12:42:04 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 12:42:04 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 12:42:04 kafka | password.encoder.old.secret = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.64101988Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 12:42:04 policy-pap | exclude.internal.topics = true 12:42:04 policy-apex-pdp | ssl.cipher.suites = null 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | password.encoder.secret = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.641201302Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=181.322µs 12:42:04 policy-pap | fetch.max.bytes = 52428800 12:42:04 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 12:42:04 kafka | 
principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.644726748Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 12:42:04 policy-pap | fetch.max.wait.ms = 500 12:42:04 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | process.roles = [] 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.645802053Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.073795ms 12:42:04 policy-pap | fetch.min.bytes = 1 12:42:04 policy-apex-pdp | ssl.engine.factory.class = null 12:42:04 policy-db-migrator | 12:42:04 kafka | producer.id.expiration.check.interval.ms = 600000 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.649490671Z level=info msg="Executing migration" id="Remove unique index org_id_name" 12:42:04 policy-pap | group.id = policy-pap 12:42:04 policy-apex-pdp | ssl.key.password = null 12:42:04 policy-db-migrator | 12:42:04 kafka | producer.id.expiration.ms = 86400000 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.650584507Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.091955ms 12:42:04 policy-pap | group.instance.id = null 12:42:04 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 12:42:04 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 12:42:04 kafka | producer.purgatory.purge.interval.requests = 1000 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.661045434Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 12:42:04 policy-pap | heartbeat.interval.ms = 3000 12:42:04 policy-apex-pdp | ssl.keystore.certificate.chain = null 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | queued.max.request.bytes = -1 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.667095924Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=6.05077ms 12:42:04 policy-pap | interceptor.classes = [] 12:42:04 policy-apex-pdp | ssl.keystore.key = null 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 12:42:04 kafka | queued.max.requests = 500 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.670998966Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 12:42:04 policy-pap | internal.leave.group.on.close = true 12:42:04 policy-apex-pdp | ssl.keystore.location = null 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | quota.window.num = 11 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.671063737Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=65.211µs 12:42:04 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 12:42:04 policy-apex-pdp | ssl.keystore.password = null 12:42:04 policy-db-migrator | 12:42:04 kafka | quota.window.size.seconds = 1 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.675059659Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 12:42:04 policy-apex-pdp | ssl.keystore.type = JKS 12:42:04 policy-db-migrator | 12:42:04 kafka | 
remote.log.index.file.cache.total.size.bytes = 1073741824 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.676134134Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.070775ms 12:42:04 policy-pap | isolation.level = read_uncommitted 12:42:04 policy-apex-pdp | ssl.protocol = TLSv1.3 12:42:04 kafka | remote.log.manager.task.interval.ms = 30000 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.684444414Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 12:42:04 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 12:42:04 policy-apex-pdp | ssl.provider = null 12:42:04 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 12:42:04 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.685927163Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.482669ms 12:42:04 policy-pap | max.partition.fetch.bytes = 1048576 12:42:04 policy-apex-pdp | ssl.secure.random.implementation = null 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | remote.log.manager.task.retry.backoff.ms = 500 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.6909449Z level=info msg="Executing migration" id="Drop old annotation table v4" 12:42:04 policy-pap | max.poll.interval.ms = 300000 12:42:04 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 12:42:04 kafka | remote.log.manager.task.retry.jitter = 0.2 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.691082171Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=138.281µs 12:42:04 policy-pap | max.poll.records = 500 12:42:04 policy-apex-pdp | ssl.truststore.certificates = null 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | remote.log.manager.thread.pool.size = 10 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.696538194Z level=info msg="Executing migration" id="create annotation table v5" 12:42:04 policy-pap | metadata.max.age.ms = 300000 12:42:04 policy-apex-pdp | ssl.truststore.location = null 12:42:04 policy-db-migrator | 12:42:04 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.697427795Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=889.501µs 12:42:04 policy-pap | metric.reporters = [] 12:42:04 policy-apex-pdp | ssl.truststore.password = null 12:42:04 policy-db-migrator | 12:42:04 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.703521976Z level=info msg="Executing migration" id="add index annotation 0 v3" 12:42:04 policy-pap | metrics.num.samples = 2 12:42:04 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 12:42:04 kafka | remote.log.metadata.manager.class.path = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.704958605Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.439359ms 
12:42:04 policy-apex-pdp | ssl.truststore.type = JKS 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.708941337Z level=info msg="Executing migration" id="add index annotation 1 v3" 12:42:04 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 12:42:04 policy-pap | metrics.recording.level = INFO 12:42:04 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.710406477Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.45756ms 12:42:04 policy-apex-pdp | 12:42:04 policy-pap | metrics.sample.window.ms = 30000 12:42:04 kafka | remote.log.metadata.manager.listener.name = null 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.718382802Z level=info msg="Executing migration" id="add index annotation 2 v3" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.798+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 12:42:04 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 12:42:04 kafka | remote.log.reader.max.pending.tasks = 100 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.798+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 12:42:04 policy-pap | receive.buffer.bytes = 65536 12:42:04 kafka | remote.log.reader.threads = 10 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.719195893Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=813.051µs 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.798+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048802798 12:42:04 policy-pap | reconnect.backoff.max.ms = 1000 12:42:04 kafka | remote.log.storage.manager.class.name = null 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.729372438Z level=info msg="Executing migration" id="add index annotation 3 v3" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.798+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Subscribed to topic(s): policy-pdp-pap 12:42:04 policy-pap | reconnect.backoff.ms = 50 12:42:04 kafka | remote.log.storage.manager.class.path = null 12:42:04 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.730802536Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.429408ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.799+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=130d2ddf-3838-4a13-ace3-2e823e62f537, alive=false, publisher=null]]: starting 12:42:04 policy-pap | request.timeout.ms = 30000 12:42:04 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.735004612Z level=info msg="Executing migration" id="add index annotation 4 v3" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.811+00:00|INFO|ProducerConfig|main] ProducerConfig values: 12:42:04 policy-pap | retry.backoff.ms = 100 12:42:04 kafka | remote.log.storage.system.enable = false 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.736287929Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.284107ms 12:42:04 policy-apex-pdp | acks = -1 12:42:04 policy-pap | sasl.client.callback.handler.class = null 12:42:04 kafka | replica.fetch.backoff.ms = 1000 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.741856242Z level=info msg="Executing migration" id="Update annotation table charset" 12:42:04 policy-apex-pdp | auto.include.jmx.reporter = true 12:42:04 policy-pap | sasl.jaas.config = null 12:42:04 kafka | replica.fetch.max.bytes = 1048576 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.741885433Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=30.621µs 12:42:04 policy-apex-pdp | batch.size = 16384 12:42:04 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 12:42:04 kafka | replica.fetch.min.bytes = 1 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.746891679Z level=info msg="Executing migration" id="Add column region_id to annotation table" 12:42:04 policy-apex-pdp | bootstrap.servers = [kafka:9092] 12:42:04 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 12:42:04 kafka | replica.fetch.response.max.bytes = 10485760 12:42:04 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.751025693Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.133604ms 12:42:04 policy-apex-pdp | buffer.memory = 33554432 12:42:04 policy-pap | sasl.kerberos.service.name = null 12:42:04 kafka | replica.fetch.wait.max.ms = 500 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.754588631Z level=info msg="Executing migration" id="Drop category_id index" 12:42:04 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 12:42:04 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 12:42:04 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.755411212Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=822.541µs 12:42:04 policy-apex-pdp | client.id = producer-1 12:42:04 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.761024186Z level=info msg="Executing migration" id="Add column tags to annotation table" 12:42:04 kafka | replica.lag.time.max.ms = 30000 12:42:04 policy-apex-pdp | compression.type = none 
12:42:04 policy-pap | sasl.login.callback.handler.class = null 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.767224027Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.198341ms 12:42:04 kafka | replica.selector.class = null 12:42:04 policy-apex-pdp | connections.max.idle.ms = 540000 12:42:04 policy-pap | sasl.login.class = null 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.774658406Z level=info msg="Executing migration" id="Create annotation_tag table v2" 12:42:04 kafka | replica.socket.receive.buffer.bytes = 65536 12:42:04 policy-apex-pdp | delivery.timeout.ms = 120000 12:42:04 policy-pap | sasl.login.connect.timeout.ms = null 12:42:04 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.775345715Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=687.819µs 12:42:04 kafka | replica.socket.timeout.ms = 30000 12:42:04 policy-apex-pdp | enable.idempotence = true 12:42:04 policy-pap | sasl.login.read.timeout.ms = null 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.778629578Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 12:42:04 kafka | replication.quota.window.num = 11 12:42:04 policy-apex-pdp | interceptor.classes = [] 12:42:04 policy-pap | sasl.login.refresh.buffer.seconds = 300 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.779529941Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=898.273µs 12:42:04 kafka | replication.quota.window.size.seconds = 1 12:42:04 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 12:42:04 policy-pap | sasl.login.refresh.min.period.seconds = 60 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.784467285Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 12:42:04 kafka | request.timeout.ms = 30000 12:42:04 policy-apex-pdp | linger.ms = 0 12:42:04 policy-db-migrator | 12:42:04 kafka | reserved.broker.max.id = 1000 12:42:04 policy-apex-pdp | max.block.ms = 60000 12:42:04 policy-pap | sasl.login.refresh.window.factor = 0.8 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.785266927Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=796.572µs 12:42:04 policy-db-migrator | 12:42:04 kafka | sasl.client.callback.handler.class = null 12:42:04 policy-apex-pdp | max.in.flight.requests.per.connection = 5 12:42:04 policy-pap | sasl.login.refresh.window.jitter = 0.05 12:42:04 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.7916093Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 12:42:04 kafka | sasl.enabled.mechanisms = [GSSAPI] 12:42:04 policy-apex-pdp | max.request.size = 1048576 12:42:04 policy-pap | sasl.login.retry.backoff.max.ms = 10000 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator 
t=2024-04-25T12:39:25.806821801Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=15.213731ms 12:42:04 policy-apex-pdp | metadata.max.age.ms = 300000 12:42:04 policy-pap | sasl.login.retry.backoff.ms = 100 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 12:42:04 kafka | sasl.jaas.config = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.810598491Z level=info msg="Executing migration" id="Create annotation_tag table v3" 12:42:04 policy-apex-pdp | metadata.max.idle.ms = 300000 12:42:04 policy-pap | sasl.mechanism = GSSAPI 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.811119238Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=520.717µs 12:42:04 policy-apex-pdp | metric.reporters = [] 12:42:04 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 12:42:04 policy-db-migrator | 12:42:04 kafka | sasl.kerberos.min.time.before.relogin = 60000 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.817718045Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 12:42:04 policy-apex-pdp | metrics.num.samples = 2 12:42:04 policy-pap | sasl.oauthbearer.expected.audience = null 12:42:04 policy-db-migrator | 12:42:04 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.819548579Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.832524ms 12:42:04 policy-apex-pdp | metrics.recording.level = INFO 12:42:04 policy-pap | sasl.oauthbearer.expected.issuer = null 12:42:04 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 12:42:04 kafka | sasl.kerberos.service.name = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.825266255Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 12:42:04 policy-apex-pdp | metrics.sample.window.ms = 30000 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.825724371Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=459.786µs 12:42:04 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 12:42:04 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.829230227Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 12:42:04 policy-apex-pdp | partitioner.availability.timeout.ms = 0 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | sasl.login.callback.handler.class = null 12:42:04 grafana | 
logger=migrator t=2024-04-25T12:39:25.829748454Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=520.467µs 12:42:04 policy-apex-pdp | partitioner.class = null 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 12:42:04 policy-db-migrator | 12:42:04 kafka | sasl.login.class = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.835432878Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 12:42:04 policy-apex-pdp | partitioner.ignore.keys = false 12:42:04 policy-pap | sasl.oauthbearer.scope.claim.name = scope 12:42:04 policy-db-migrator | 12:42:04 kafka | sasl.login.connect.timeout.ms = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.835741493Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=311.555µs 12:42:04 policy-apex-pdp | receive.buffer.bytes = 32768 12:42:04 policy-pap | sasl.oauthbearer.sub.claim.name = sub 12:42:04 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 12:42:04 kafka | sasl.login.read.timeout.ms = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.841003352Z level=info msg="Executing migration" id="Add created time to annotation table" 12:42:04 policy-apex-pdp | reconnect.backoff.max.ms = 1000 12:42:04 policy-pap | sasl.oauthbearer.token.endpoint.url = null 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | sasl.login.refresh.buffer.seconds = 300 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:25.845597683Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.593041ms 12:42:04 policy-apex-pdp | reconnect.backoff.ms = 50 12:42:04 policy-pap | security.protocol = PLAINTEXT 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 12:42:04 kafka | sasl.login.refresh.min.period.seconds = 60 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.067271752Z level=info msg="Executing migration" id="Add updated time to annotation table" 12:42:04 policy-apex-pdp | request.timeout.ms = 30000 12:42:04 policy-pap | security.providers = null 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.074875613Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=7.606021ms 12:42:04 policy-apex-pdp | retries = 2147483647 12:42:04 policy-pap | send.buffer.bytes = 131072 12:42:04 policy-db-migrator | 12:42:04 kafka | sasl.login.refresh.window.factor = 0.8 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.080688249Z level=info msg="Executing migration" id="Add index for created in annotation table" 12:42:04 policy-apex-pdp | retry.backoff.ms = 100 12:42:04 policy-pap | session.timeout.ms = 45000 12:42:04 policy-db-migrator | 12:42:04 kafka | sasl.login.refresh.window.jitter = 0.05 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.081417449Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=729.15µs 12:42:04 policy-apex-pdp | sasl.client.callback.handler.class = null 12:42:04 policy-pap | socket.connection.setup.timeout.max.ms = 30000 12:42:04 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 12:42:04 kafka | 
sasl.login.retry.backoff.max.ms = 10000 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.090588691Z level=info msg="Executing migration" id="Add index for updated in annotation table" 12:42:04 policy-apex-pdp | sasl.jaas.config = null 12:42:04 policy-pap | socket.connection.setup.timeout.ms = 10000 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | sasl.login.retry.backoff.ms = 100 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.092508236Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.917994ms 12:42:04 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 12:42:04 policy-pap | ssl.cipher.suites = null 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 12:42:04 kafka | sasl.mechanism.controller.protocol = GSSAPI 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.096548499Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 12:42:04 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 12:42:04 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.097015575Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=466.246µs 12:42:04 policy-apex-pdp | sasl.kerberos.service.name = null 12:42:04 policy-pap | ssl.endpoint.identification.algorithm = https 12:42:04 policy-db-migrator | 12:42:04 kafka | sasl.oauthbearer.clock.skew.seconds = 30 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.102528198Z level=info msg="Executing migration" id="Add epoch_end column" 12:42:04 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 12:42:04 policy-pap | ssl.engine.factory.class = null 12:42:04 policy-db-migrator | 12:42:04 kafka | sasl.oauthbearer.expected.audience = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.110116059Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=7.587041ms 12:42:04 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 12:42:04 kafka | sasl.oauthbearer.expected.issuer = null 12:42:04 policy-apex-pdp | sasl.login.callback.handler.class = null 12:42:04 policy-pap | ssl.key.password = null 12:42:04 policy-db-migrator | > upgrade 0450-pdpgroup.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.115871774Z level=info msg="Executing migration" id="Add index for epoch_end" 12:42:04 policy-apex-pdp | sasl.login.class = null 12:42:04 policy-pap | ssl.keymanager.algorithm = SunX509 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.116894538Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.024154ms 12:42:04 policy-apex-pdp | sasl.login.connect.timeout.ms = null 12:42:04 policy-pap | ssl.keystore.certificate.chain = null 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, 
PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 12:42:04 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 12:42:04 policy-apex-pdp | sasl.login.read.timeout.ms = null 12:42:04 policy-pap | ssl.keystore.key = null 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.122724475Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 12:42:04 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 12:42:04 policy-pap | ssl.keystore.location = null 12:42:04 policy-db-migrator | 12:42:04 kafka | sasl.oauthbearer.jwks.endpoint.url = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.123020278Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=295.423µs 12:42:04 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 12:42:04 policy-pap | ssl.keystore.password = null 12:42:04 policy-db-migrator | 12:42:04 kafka | sasl.oauthbearer.scope.claim.name = scope 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.127171404Z level=info msg="Executing migration" id="Move region to single row" 12:42:04 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 12:42:04 policy-pap | ssl.keystore.type = JKS 12:42:04 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 12:42:04 kafka | sasl.oauthbearer.sub.claim.name = sub 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.127749421Z level=info msg="Migration successfully executed" id="Move region to single row" duration=577.117µs 12:42:04 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 12:42:04 policy-pap | ssl.protocol = TLSv1.3 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | sasl.oauthbearer.token.endpoint.url = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.132975Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 12:42:04 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 12:42:04 policy-pap | ssl.provider = null 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 12:42:04 kafka | sasl.server.callback.handler.class = null 12:42:04 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 12:42:04 policy-pap | ssl.secure.random.implementation = null 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | sasl.server.max.receive.size = 524288 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.134369528Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.394108ms 12:42:04 policy-apex-pdp | sasl.mechanism = GSSAPI 12:42:04 policy-pap | ssl.trustmanager.algorithm = PKIX 12:42:04 policy-db-migrator | 12:42:04 kafka | security.inter.broker.protocol = PLAINTEXT 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.140101294Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 12:42:04 
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 12:42:04 policy-pap | ssl.truststore.certificates = null 12:42:04 policy-db-migrator | 12:42:04 kafka | security.providers = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.141019527Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=916.083µs 12:42:04 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 12:42:04 policy-pap | ssl.truststore.location = null 12:42:04 policy-db-migrator | > upgrade 0470-pdp.sql 12:42:04 kafka | server.max.startup.time.ms = 9223372036854775807 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.146837284Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 12:42:04 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 12:42:04 policy-pap | ssl.truststore.password = null 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | socket.connection.setup.timeout.max.ms = 30000 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.148169701Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.330027ms 12:42:04 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 12:42:04 policy-pap | ssl.truststore.type = JKS 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 12:42:04 kafka | socket.connection.setup.timeout.ms = 10000 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.155284405Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 12:42:04 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 12:42:04 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | socket.listen.backlog.size = 50 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.156583972Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.302147ms 12:42:04 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 12:42:04 policy-pap | 12:42:04 policy-db-migrator | 12:42:04 kafka | socket.receive.buffer.bytes = 102400 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.161328555Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 12:42:04 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 12:42:04 policy-pap | [2024-04-25T12:39:59.216+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 12:42:04 policy-db-migrator | 12:42:04 kafka | socket.request.max.bytes = 104857600 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.162007073Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=679.768µs 12:42:04 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 12:42:04 policy-pap | [2024-04-25T12:39:59.216+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 12:42:04 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 12:42:04 kafka | 
socket.send.buffer.bytes = 102400 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.167315143Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 12:42:04 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 12:42:04 policy-pap | [2024-04-25T12:39:59.216+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048799216 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | ssl.cipher.suites = [] 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.168360547Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=994.343µs 12:42:04 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 12:42:04 policy-pap | [2024-04-25T12:39:59.217+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 12:42:04 kafka | ssl.client.auth = none 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.175903537Z level=info msg="Executing migration" id="Increase tags column to length 4096" 12:42:04 policy-pap | [2024-04-25T12:39:59.710+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 12:42:04 policy-apex-pdp | security.protocol = PLAINTEXT 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.176033179Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=131.392µs 12:42:04 policy-apex-pdp | security.providers = null 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:39:59.856+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 12:42:04 kafka | ssl.endpoint.identification.algorithm = https 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.181939316Z level=info msg="Executing migration" id="create test_data table" 12:42:04 policy-apex-pdp | send.buffer.bytes = 131072 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:00.107+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@40db6136, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@5ced0537, org.springframework.security.web.context.SecurityContextHolderFilter@50e24ea4, org.springframework.security.web.header.HeaderWriterFilter@3605ab16, org.springframework.security.web.authentication.logout.LogoutFilter@2befb16f, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@78ea700f, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@22172b00, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@4205d5d0, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6ee1ddcf, org.springframework.security.web.access.ExceptionTranslationFilter@2e7517aa, org.springframework.security.web.access.intercept.AuthorizationFilter@23d23d98] 12:42:04 kafka | ssl.engine.factory.class = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.183255764Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.315418ms 12:42:04 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 12:42:04 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 12:42:04 policy-pap | [2024-04-25T12:40:00.908+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 12:42:04 kafka | ssl.key.password = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.194614774Z level=info msg="Executing migration" id="create dashboard_version table v1" 12:42:04 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:01.000+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 12:42:04 kafka | ssl.keymanager.algorithm = SunX509 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.195480265Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=864.791µs 12:42:04 policy-apex-pdp | ssl.cipher.suites = null 12:42:04 policy-pap | [2024-04-25T12:40:01.013+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 12:42:04 kafka | ssl.keystore.certificate.chain = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.208839202Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 12:42:04 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 12:42:04 
policy-pap | [2024-04-25T12:40:01.029+00:00|INFO|ServiceManager|main] Policy PAP starting 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | ssl.keystore.key = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.210326851Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.495739ms 12:42:04 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 12:42:04 policy-pap | [2024-04-25T12:40:01.029+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 12:42:04 policy-db-migrator | 12:42:04 kafka | ssl.keystore.location = null 12:42:04 policy-apex-pdp | ssl.engine.factory.class = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.214418106Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 12:42:04 policy-pap | [2024-04-25T12:40:01.030+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 12:42:04 policy-db-migrator | 12:42:04 kafka | ssl.keystore.password = null 12:42:04 policy-apex-pdp | ssl.key.password = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.21556163Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.143064ms 12:42:04 policy-pap | [2024-04-25T12:40:01.031+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 12:42:04 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 12:42:04 kafka | ssl.keystore.type = JKS 12:42:04 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.222060777Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 12:42:04 policy-pap | [2024-04-25T12:40:01.031+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | ssl.principal.mapping.rules = DEFAULT 12:42:04 policy-apex-pdp | ssl.keystore.certificate.chain = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.222252889Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=190.482µs 12:42:04 policy-pap | [2024-04-25T12:40:01.031+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 12:42:04 kafka | ssl.protocol = TLSv1.3 12:42:04 policy-apex-pdp | ssl.keystore.key = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.230079912Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 12:42:04 policy-pap | [2024-04-25T12:40:01.031+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | ssl.provider = null 12:42:04 policy-apex-pdp | ssl.keystore.location = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.230675221Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=595.289µs 12:42:04 policy-pap | [2024-04-25T12:40:01.033+00:00|INFO|TopicBase|main] 
SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=53d3b957-3026-4843-bc4f-55d426241089, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@15a8bbe5 12:42:04 policy-db-migrator | 12:42:04 kafka | ssl.secure.random.implementation = null 12:42:04 policy-apex-pdp | ssl.keystore.password = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.237668502Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 12:42:04 policy-pap | [2024-04-25T12:40:01.046+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=53d3b957-3026-4843-bc4f-55d426241089, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 12:42:04 policy-db-migrator | 12:42:04 kafka | ssl.trustmanager.algorithm = PKIX 12:42:04 policy-apex-pdp | ssl.keystore.type = JKS 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.237772944Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=105.692µs 12:42:04 policy-pap | [2024-04-25T12:40:01.047+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 12:42:04 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 12:42:04 kafka | ssl.truststore.certificates = null 12:42:04 policy-apex-pdp | ssl.protocol = TLSv1.3 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.244454552Z level=info msg="Executing migration" id="create team table" 12:42:04 policy-pap | allow.auto.create.topics = true 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | ssl.truststore.location = null 12:42:04 policy-apex-pdp | ssl.provider = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.245773189Z level=info msg="Migration successfully executed" id="create team table" duration=1.316477ms 12:42:04 policy-pap | auto.commit.interval.ms = 5000 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 12:42:04 kafka | ssl.truststore.password = null 12:42:04 policy-apex-pdp | ssl.secure.random.implementation = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.249710251Z level=info msg="Executing migration" id="add index team.org_id" 12:42:04 policy-pap | auto.include.jmx.reporter = true 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | ssl.truststore.type = JKS 
12:42:04 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.251308873Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.605142ms 12:42:04 policy-pap | auto.offset.reset = latest 12:42:04 policy-db-migrator | 12:42:04 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 12:42:04 policy-apex-pdp | ssl.truststore.certificates = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.257571915Z level=info msg="Executing migration" id="add unique index team_org_id_name" 12:42:04 policy-pap | bootstrap.servers = [kafka:9092] 12:42:04 policy-db-migrator | 12:42:04 kafka | transaction.max.timeout.ms = 900000 12:42:04 policy-apex-pdp | ssl.truststore.location = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.258534288Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=960.003µs 12:42:04 policy-pap | check.crcs = true 12:42:04 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 12:42:04 kafka | transaction.partition.verification.enable = true 12:42:04 policy-apex-pdp | ssl.truststore.password = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.269486402Z level=info msg="Executing migration" id="Add column uid in team" 12:42:04 policy-pap | client.dns.lookup = use_all_dns_ips 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 12:42:04 policy-apex-pdp | ssl.truststore.type = JKS 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.278565223Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=9.076811ms 12:42:04 policy-pap | client.id = consumer-53d3b957-3026-4843-bc4f-55d426241089-3 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 12:42:04 kafka | transaction.state.log.load.buffer.size = 5242880 12:42:04 policy-apex-pdp | transaction.timeout.ms = 60000 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.282406043Z level=info msg="Executing migration" id="Update uid column values in team" 12:42:04 policy-pap | client.rack = 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | transaction.state.log.min.isr = 2 12:42:04 policy-apex-pdp | transactional.id = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.282648066Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=241.673µs 12:42:04 policy-pap | connections.max.idle.ms = 540000 12:42:04 policy-db-migrator | 12:42:04 kafka | transaction.state.log.num.partitions = 50 12:42:04 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.288836238Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 12:42:04 policy-pap | default.api.timeout.ms = 60000 12:42:04 policy-db-migrator | 12:42:04 kafka | transaction.state.log.replication.factor = 3 12:42:04 policy-apex-pdp | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.289895812Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.059004ms 12:42:04 policy-pap | enable.auto.commit = true 12:42:04 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 12:42:04 
kafka | transaction.state.log.segment.bytes = 104857600 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.822+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.293877964Z level=info msg="Executing migration" id="create team member table" 12:42:04 policy-pap | exclude.internal.topics = true 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | transactional.id.expiration.ms = 604800000 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.838+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.295327754Z level=info msg="Migration successfully executed" id="create team member table" duration=1.44868ms 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 12:42:04 kafka | unclean.leader.election.enable = false 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.838+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.302117053Z level=info msg="Executing migration" id="add index team_member.org_id" 12:42:04 policy-pap | fetch.max.bytes = 52428800 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | unstable.api.versions.enable = false 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.838+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048802838 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.303133937Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.015983ms 12:42:04 policy-pap | fetch.max.wait.ms = 500 12:42:04 policy-db-migrator | 12:42:04 kafka | zookeeper.clientCnxnSocket = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.838+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=130d2ddf-3838-4a13-ace3-2e823e62f537, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.350560603Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 12:42:04 policy-pap | fetch.min.bytes = 1 12:42:04 policy-db-migrator | 12:42:04 kafka | zookeeper.connect = zookeeper:2181 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.839+00:00|INFO|ServiceManager|main] service manager starting set alive 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.352740662Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=2.180729ms 12:42:04 policy-pap | group.id = 53d3b957-3026-4843-bc4f-55d426241089 12:42:04 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 12:42:04 kafka | zookeeper.connection.timeout.ms = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.839+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.36323211Z level=info msg="Executing migration" id="add index team_member.team_id" 12:42:04 
policy-pap | group.instance.id = null 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | zookeeper.max.in.flight.requests = 10 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.840+00:00|INFO|ServiceManager|main] service manager starting topic sinks 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.364251713Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.019053ms 12:42:04 policy-pap | heartbeat.interval.ms = 3000 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 12:42:04 kafka | zookeeper.metadata.migration.enable = false 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.840+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.368766274Z level=info msg="Executing migration" id="Add column email to team table" 12:42:04 policy-pap | interceptor.classes = [] 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | zookeeper.metadata.migration.min.batch.size = 200 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.842+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.377296166Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=8.530462ms 12:42:04 policy-pap | internal.leave.group.on.close = true 12:42:04 policy-db-migrator | 12:42:04 kafka | zookeeper.session.timeout.ms = 18000 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.842+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.383598709Z level=info msg="Executing migration" id="Add column external to team_member table" 12:42:04 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 12:42:04 policy-db-migrator | 12:42:04 kafka | zookeeper.set.acl = false 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.842+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.388283061Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.680882ms 12:42:04 policy-pap | isolation.level = read_uncommitted 12:42:04 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 12:42:04 kafka | zookeeper.ssl.cipher.suites = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.842+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4b79aeb3-604a-4e33-80d9-cdeedf19ce63, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@607fbe09 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.393019434Z level=info 
msg="Executing migration" id="Add column permission to team_member table" 12:42:04 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | zookeeper.ssl.client.enable = false 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.843+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4b79aeb3-604a-4e33-80d9-cdeedf19ce63, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.398312923Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.292879ms 12:42:04 policy-pap | max.partition.fetch.bytes = 1048576 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 12:42:04 kafka | zookeeper.ssl.crl.enable = false 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.843+00:00|INFO|ServiceManager|main] service manager starting Create REST server 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.405891944Z level=info msg="Executing migration" id="create dashboard acl table" 12:42:04 policy-pap | max.poll.interval.ms = 300000 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | zookeeper.ssl.enabled.protocols = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.855+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.406885186Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=992.622µs 12:42:04 policy-pap | max.poll.records = 500 12:42:04 policy-db-migrator | 12:42:04 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 12:42:04 policy-apex-pdp | [] 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.412307538Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 12:42:04 policy-pap | metadata.max.age.ms = 300000 12:42:04 policy-db-migrator | 12:42:04 kafka | zookeeper.ssl.keystore.location = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:02.860+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.414074822Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.765894ms 12:42:04 policy-pap | metric.reporters = [] 12:42:04 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 12:42:04 kafka | zookeeper.ssl.keystore.password = null 12:42:04 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a0278ad7-a33f-4693-8b54-fde3c5ffe2e1","timestampMs":1714048802842,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.419596305Z level=info 
msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 12:42:04 policy-pap | metrics.num.samples = 2 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | zookeeper.ssl.keystore.type = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.002+00:00|INFO|ServiceManager|main] service manager starting Rest Server 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.42146644Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.869745ms 12:42:04 policy-pap | metrics.recording.level = INFO 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.002+00:00|INFO|ServiceManager|main] service manager starting 12:42:04 policy-pap | metrics.sample.window.ms = 30000 12:42:04 kafka | zookeeper.ssl.ocsp.enable = false 12:42:04 kafka | zookeeper.ssl.protocol = TLSv1.2 12:42:04 kafka | zookeeper.ssl.truststore.location = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.429897681Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 12:42:04 policy-db-migrator | 12:42:04 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.431188688Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.292317ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.002+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 12:42:04 policy-pap | receive.buffer.bytes = 65536 12:42:04 kafka | zookeeper.ssl.truststore.password = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.440578412Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.002+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, 
servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 12:42:04 policy-pap | reconnect.backoff.max.ms = 1000 12:42:04 kafka | zookeeper.ssl.truststore.type = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.442096192Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.51605ms 12:42:04 policy-db-migrator | > upgrade 0570-toscadatatype.sql 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.011+00:00|INFO|ServiceManager|main] service manager started 12:42:04 policy-pap | reconnect.backoff.ms = 50 12:42:04 kafka | (kafka.server.KafkaConfig) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.450827787Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.011+00:00|INFO|ServiceManager|main] service manager started 12:42:04 policy-pap | request.timeout.ms = 30000 12:42:04 kafka | [2024-04-25 12:39:23,738] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.452483949Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.654292ms 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.011+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
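The policy-db-migrator entries interleaved through this stretch follow a fixed pattern: an "> upgrade NNNN-name.sql" banner, a "--------------" delimiter, the CREATE TABLE IF NOT EXISTS statement, and a closing delimiter, walking the numbered TOSCA schema scripts (0510, 0520, 0530, ...) in order. The migrator's own runner is not shown in this log, so the following is only a hedged sketch of applying one of the logged statements over JDBC — the JDBC URL, database name, and credentials are placeholders, not values from this run:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MigratorStepSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details (assumes a MariaDB JDBC driver on the
        // classpath); the real migrator applies its numbered *.sql scripts itself.
        String url = "jdbc:mariadb://mariadb:3306/policyadmin";
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
             Statement stmt = conn.createStatement()) {
            // One statement from the 0520 script, exactly as echoed in the log;
            // IF NOT EXISTS makes the step idempotent across repeated runs.
            stmt.executeUpdate(
                "CREATE TABLE IF NOT EXISTS toscacapabilityassignments "
                + "(name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, "
                + "PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))");
        }
    }
}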
12:42:04 policy-pap | retry.backoff.ms = 100 12:42:04 kafka | [2024-04-25 12:39:23,738] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.456659494Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | sasl.client.callback.handler.class = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.011+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5aabbb29{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@72c927f1{/,null,STOPPED}, connector=RestServerParameters@53ab0286{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-21694e53==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@2326051b{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-46074492==org.glassfish.jersey.servlet.ServletContainer@705041b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.138+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.458158234Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.49823ms 12:42:04 policy-db-migrator | 12:42:04 policy-pap | sasl.jaas.config = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.138+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 1 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.139+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Cluster ID: 6HLElDkITkKpDhaqvETosg 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.462629193Z level=info msg="Executing migration" id="add index dashboard_permission" 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.139+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 6HLElDkITkKpDhaqvETosg 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.140+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.463641717Z level=info msg="Migration 
successfully executed" id="add index dashboard_permission" duration=1.012014ms 12:42:04 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 12:42:04 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.242+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.257+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.470448746Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.347+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.360+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 5 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.471272177Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=819.26µs 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 12:42:04 policy-pap | sasl.kerberos.service.name = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.449+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:39:23,739] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.475516683Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 12:42:04 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.462+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:39:23,744] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.475995229Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=478.026µs 12:42:04 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 12:42:04 policy-apex-pdp | 
[2024-04-25T12:40:03.552+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:39:23,770] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.481721065Z level=info msg="Executing migration" id="create tag table" 12:42:04 policy-pap | sasl.login.callback.handler.class = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.571+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:39:23,774] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 12:42:04 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.482595217Z level=info msg="Migration successfully executed" id="create tag table" duration=873.612µs 12:42:04 policy-pap | sasl.login.class = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.653+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 12:42:04 kafka | [2024-04-25 12:39:23,782] INFO Loaded 0 logs in 12ms (kafka.log.LogManager) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.493890606Z level=info msg="Executing migration" id="add index tag.key_value" 12:42:04 policy-pap | sasl.login.connect.timeout.ms = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.653+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 12:42:04 kafka | [2024-04-25 12:39:23,784] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.495636959Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.747743ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.671743614Z level=info msg="Executing migration" id="create login attempt table" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.654+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:39:23,785] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | sasl.login.read.timeout.ms = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.673214884Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.473ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.675+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:39:23,794] INFO Starting the log cleaner (kafka.log.LogCleaner) 12:42:04 policy-db-migrator | 12:42:04 policy-pap | sasl.login.refresh.buffer.seconds = 300 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.683031333Z level=info msg="Executing migration" id="add index login_attempt.username" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.756+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:39:23,836] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 12:42:04 policy-db-migrator | 12:42:04 policy-pap | sasl.login.refresh.min.period.seconds = 60 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.683999676Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=966.883µs 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.778+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 9 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:39:23,865] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 12:42:04 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 12:42:04 policy-pap | sasl.login.refresh.window.factor = 0.8 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.690600143Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.859+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:39:23,879] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | sasl.login.refresh.window.jitter = 0.05 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.692238314Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.640511ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.882+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:39:23,907] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` 
VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 12:42:04 policy-pap | sasl.login.retry.backoff.max.ms = 10000 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.698749781Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.965+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:39:24,228] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | sasl.login.retry.backoff.ms = 100 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.715786606Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=17.038085ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:03.986+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:39:24,249] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 12:42:04 policy-db-migrator | 12:42:04 policy-pap | sasl.mechanism = GSSAPI 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.764210345Z level=info msg="Executing migration" id="create login_attempt v2" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.071+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:39:24,249] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 12:42:04 policy-db-migrator | 12:42:04 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.765436971Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.227536ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.091+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:39:24,255] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 12:42:04 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 12:42:04 policy-pap | sasl.oauthbearer.expected.audience = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.76912496Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 
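The repeated WARN lines above are expected first-contact noise rather than failures: policy-pdp-pap does not exist yet, so the apex-pdp consumer's and producer's metadata fetches return UNKNOWN_TOPIC_OR_PARTITION; once the topic is created on the broker side the error shifts to LEADER_NOT_AVAILABLE (visible from correlation id 18 above) until a partition leader is elected, after which both clients proceed normally. One way to avoid the warning storm altogether — an alternative illustration, not something this CSIT job does — would be to pre-create the topic with the Kafka AdminClient; a minimal sketch, assuming the single-broker setup this log shows (only broker id 1 is registered):

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicPreCreateSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 1 partition / replication factor 1 are assumptions suited to a
            // single-broker compose environment like the one in this log.
            admin.createTopics(List.of(new NewTopic("policy-pdp-pap", 1, (short) 1)))
                 .all().get(); // block until the broker has created the topic
        }
    }
}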
12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.176+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | sasl.oauthbearer.expected.issuer = null 12:42:04 kafka | [2024-04-25 12:39:24,259] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.770601059Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.475449ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.195+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 13 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.776154782Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.280+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:39:24,286] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.776450556Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=297.554µs 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.299+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:39:24,289] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.779760361Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.386+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 26 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:39:24,291] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 12:42:04 policy-pap | sasl.oauthbearer.scope.claim.name = scope 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.780654562Z level=info msg="Migration successfully 
executed" id="drop login_attempt_tmp_qwerty" duration=890.961µs 12:42:04 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 12:42:04 kafka | [2024-04-25 12:39:24,291] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 12:42:04 policy-pap | sasl.oauthbearer.sub.claim.name = sub 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.405+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 15 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.79341839Z level=info msg="Executing migration" id="create user auth table" 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:39:24,292] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 12:42:04 policy-pap | sasl.oauthbearer.token.endpoint.url = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.491+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 28 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.795245095Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.830465ms 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 12:42:04 kafka | [2024-04-25 12:39:24,307] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 12:42:04 policy-pap | security.protocol = PLAINTEXT 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.511+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.805271338Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:39:24,308] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 12:42:04 policy-pap | security.providers = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.597+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 30 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.806834078Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.56641ms 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:39:24,347] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 12:42:04 policy-pap | send.buffer.bytes = 131072 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.616+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 17 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.811215306Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:39:24,375] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1714048764365,1714048764365,1,0,0,72057619973079041,258,0,27 12:42:04 policy-pap | session.timeout.ms = 45000 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.702+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 32 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.811304657Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=90.041µs 12:42:04 policy-db-migrator | > upgrade 0630-toscanodetype.sql 12:42:04 kafka | (kafka.zk.KafkaZkClient) 12:42:04 policy-pap | socket.connection.setup.timeout.max.ms = 30000 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.720+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.815738105Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:39:24,376] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 12:42:04 policy-pap | socket.connection.setup.timeout.ms = 10000 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.806+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 34 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.821025495Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.28669ms 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 12:42:04 kafka | [2024-04-25 12:39:24,428] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.825+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 19 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.832922493Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 12:42:04 policy-db-migrator | -------------- 12:42:04 
policy-pap | ssl.cipher.suites = null 12:42:04 kafka | [2024-04-25 12:39:24,435] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.910+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 36 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.841014839Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=8.096286ms 12:42:04 policy-db-migrator | 12:42:04 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 12:42:04 kafka | [2024-04-25 12:39:24,442] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:04.929+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.850552635Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 12:42:04 policy-db-migrator | 12:42:04 policy-pap | ssl.endpoint.identification.algorithm = https 12:42:04 kafka | [2024-04-25 12:39:24,443] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.015+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 38 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.858171696Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=7.619991ms 12:42:04 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 12:42:04 policy-pap | ssl.engine.factory.class = null 12:42:04 kafka | [2024-04-25 12:39:24,456] INFO [GroupCoordinator 1]: Starting up. 
(kafka.coordinator.group.GroupCoordinator) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.034+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 21 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.865362771Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | ssl.key.password = null 12:42:04 kafka | [2024-04-25 12:39:24,509] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.119+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 40 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.873302856Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=7.918174ms 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 12:42:04 policy-pap | ssl.keymanager.algorithm = SunX509 12:42:04 kafka | [2024-04-25 12:39:24,513] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.137+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.88117951Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | ssl.keystore.certificate.chain = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.223+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 42 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.882037461Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=853.751µs 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:39:24,523] INFO [Controller id=1] 1 successfully elected as the controller. 
Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 12:42:04 policy-pap | ssl.keystore.key = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.241+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 23 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.888572537Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:39:24,526] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 12:42:04 policy-pap | ssl.keystore.location = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.327+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 44 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.895490829Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.919042ms 12:42:04 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 12:42:04 kafka | [2024-04-25 12:39:24,530] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 12:42:04 policy-pap | ssl.keystore.password = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.344+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.902357449Z level=info msg="Executing migration" id="create server_lock table" 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:39:24,541] INFO [TransactionCoordinator id=1] Starting up. 
(kafka.coordinator.transaction.TransactionCoordinator)
12:42:04 policy-pap | ssl.keystore.type = JKS
12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.432+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 46 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.903540195Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.181056ms
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
12:42:04 kafka | [2024-04-25 12:39:24,546] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.448+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 25 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.909842848Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
12:42:04 policy-pap | ssl.protocol = TLSv1.3
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:39:24,547] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.536+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 48 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | ssl.provider = null
12:42:04 policy-db-migrator | 
12:42:04 kafka | [2024-04-25 12:39:24,557] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.551+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 26 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.912002307Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=2.159049ms
12:42:04 policy-pap | ssl.secure.random.implementation = null
12:42:04 policy-db-migrator | 
12:42:04 kafka | [2024-04-25 12:39:24,558] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.641+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 50 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.916141511Z level=info msg="Executing migration" id="create user auth token table"
12:42:04 policy-pap | ssl.trustmanager.algorithm = PKIX
12:42:04 kafka | [2024-04-25 12:39:24,563] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.654+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 27 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | > upgrade 0660-toscaparameter.sql
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.917386728Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.247617ms
12:42:04 policy-pap | ssl.truststore.certificates = null
12:42:04 kafka | [2024-04-25 12:39:24,566] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.745+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 52 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.930601512Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
12:42:04 policy-pap | ssl.truststore.location = null
12:42:04 kafka | [2024-04-25 12:39:24,569] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.756+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 28 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.931859209Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.253467ms
12:42:04 policy-pap | ssl.truststore.password = null
12:42:04 kafka | [2024-04-25 12:39:24,586] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.850+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 54 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.94185015Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
12:42:04 policy-pap | ssl.truststore.type = JKS
12:42:04 kafka | [2024-04-25 12:39:24,593] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.859+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 29 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | 
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.943600064Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.750594ms
12:42:04 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
12:42:04 kafka | [2024-04-25 12:39:24,600] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.959+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 56 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | 
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.95079882Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
12:42:04 policy-pap | 
12:42:04 kafka | [2024-04-25 12:39:24,606] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:05.965+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 30 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | > upgrade 0670-toscapolicies.sql
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.952678724Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.879424ms
12:42:04 policy-pap | [2024-04-25T12:40:01.053+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
12:42:04 kafka | [2024-04-25 12:39:24,616] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.063+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 58 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | [2024-04-25T12:40:01.053+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.962761787Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
12:42:04 kafka | [2024-04-25 12:39:24,616] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.073+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 31 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | [2024-04-25T12:40:01.053+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048801053
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.971933828Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=9.172431ms
12:42:04 kafka | [2024-04-25 12:39:24,616] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.166+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 60 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | [2024-04-25T12:40:01.053+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Subscribed to topic(s): policy-pdp-pap
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.977959857Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
12:42:04 kafka | [2024-04-25 12:39:24,617] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
12:42:04 policy-db-migrator | 
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.176+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 32 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | [2024-04-25T12:40:01.054+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.980357639Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=2.402112ms
12:42:04 kafka | [2024-04-25 12:39:24,617] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
12:42:04 policy-db-migrator | 
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.270+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 62 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | [2024-04-25T12:40:01.054+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=adf16b33-6825-4228-b603-1e51991b0aaa, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1cc81ea1
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.989426118Z level=info msg="Executing migration" id="create cache_data table"
12:42:04 kafka | [2024-04-25 12:39:24,618] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
12:42:04 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.280+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 33 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | [2024-04-25T12:40:01.054+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=adf16b33-6825-4228-b603-1e51991b0aaa, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.990324901Z level=info msg="Migration successfully executed" id="create cache_data table" duration=898.033µs
12:42:04 kafka | [2024-04-25 12:39:24,620] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.374+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 64 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | [2024-04-25T12:40:01.054+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:26.998790083Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
12:42:04 kafka | [2024-04-25 12:39:24,621] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.383+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 34 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | allow.auto.create.topics = true
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:27.000639357Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.848304ms
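The repeating LEADER_NOT_AVAILABLE warnings through this stretch are the apex-pdp consumer and producer refreshing metadata for policy-pdp-pap before the broker has finished auto-creating the topic and electing a partition leader (the "Creating topic policy-pdp-pap" and partition state-change controller entries only appear further below, at 12:40:01-12:40:02); both clients retry on their own and the warnings stop once a leader exists. A minimal sketch of the same readiness check, assuming the Kafka 3.6 Java AdminClient (matching the client version logged above) and a hypothetical LeaderCheck class that is not part of this job:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

// Hypothetical helper, not part of the CSIT suite: polls the broker from the log
// (kafka:9092) until every partition of policy-pdp-pap reports a leader, i.e. until
// metadata fetches would stop answering LEADER_NOT_AVAILABLE.
public class LeaderCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            while (true) {
                try {
                    TopicDescription d = admin
                            .describeTopics(Collections.singleton("policy-pdp-pap"))
                            .allTopicNames().get().get("policy-pdp-pap");
                    if (d.partitions().stream()
                            .allMatch(p -> p.leader() != null && !p.leader().isEmpty())) {
                        break; // leader elected; the warnings above would end here
                    }
                } catch (Exception e) {
                    // e.g. UnknownTopicOrPartitionException while the topic is still being created
                }
                Thread.sleep(500);
            }
        }
    }
}

Once the leader election completes, the consumer and producer recover on their next metadata refresh without any intervention, which is why these entries are WARN rather than ERROR.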
12:42:04 kafka | [2024-04-25 12:39:24,621] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.479+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 66 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | auto.commit.interval.ms = 5000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:27.008647642Z level=info msg="Executing migration" id="create short_url table v1"
12:42:04 kafka | [2024-04-25 12:39:24,622] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
12:42:04 policy-db-migrator | 
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.510+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 35 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | auto.include.jmx.reporter = true
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:27.010298005Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.652643ms
12:42:04 kafka | [2024-04-25 12:39:24,622] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
12:42:04 policy-db-migrator | 
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.582+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 68 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | auto.offset.reset = latest
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:27.021748116Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
12:42:04 kafka | [2024-04-25 12:39:24,626] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.614+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 36 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | bootstrap.servers = [kafka:9092]
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:27.023887854Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=2.138848ms
12:42:04 kafka | [2024-04-25 12:39:24,630] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
12:42:04 policy-db-migrator | > upgrade 0690-toscapolicy.sql
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.685+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 70 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | check.crcs = true
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:27.39278261Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
12:42:04 kafka | [2024-04-25 12:39:24,633] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.718+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 37 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | client.dns.lookup = use_all_dns_ips
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:27.392983363Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=203.823µs
12:42:04 kafka | [2024-04-25 12:39:24,634] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.792+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 72 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | client.id = consumer-policy-pap-4
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:27.476399474Z level=info msg="Executing migration" id="delete alert_definition table"
12:42:04 kafka | [2024-04-25 12:39:24,640] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.822+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 38 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | client.rack =
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:27.476565206Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=169.412µs
12:42:04 kafka | [2024-04-25 12:39:24,640] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
12:42:04 policy-db-migrator | 
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.896+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 74 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | connections.max.idle.ms = 540000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:27.728396557Z level=info msg="Executing migration" id="recreate alert_definition table"
12:42:04 kafka | [2024-04-25 12:39:24,640] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
12:42:04 policy-db-migrator | 
12:42:04 policy-apex-pdp | [2024-04-25T12:40:06.927+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 39 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | default.api.timeout.ms = 60000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:27.729992229Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.598942ms
12:42:04 kafka | [2024-04-25 12:39:24,641] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
12:42:04 policy-db-migrator | > upgrade 0700-toscapolicytype.sql
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.003+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 76 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | enable.auto.commit = true
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:27.886959581Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
12:42:04 kafka | [2024-04-25 12:39:24,642] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.031+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 40 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | exclude.internal.topics = true
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:27.888988408Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=2.031377ms
12:42:04 kafka | [2024-04-25 12:39:24,644] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.108+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 78 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | fetch.max.bytes = 52428800
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.037435976Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:39:24,645] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.134+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 41 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | fetch.max.wait.ms = 500
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.039123208Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.689762ms
12:42:04 policy-db-migrator | 
12:42:04 kafka | [2024-04-25 12:39:24,646] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.210+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 80 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | fetch.min.bytes = 1
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.054732744Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
12:42:04 policy-db-migrator | 
12:42:04 kafka | [2024-04-25 12:39:24,646] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.238+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 42 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | group.id = policy-pap
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.054861746Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=126.941µs
12:42:04 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
12:42:04 kafka | [2024-04-25 12:39:24,646] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.8:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.315+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 82 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | group.instance.id = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.063481199Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:39:24,648] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.343+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 43 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | heartbeat.interval.ms = 3000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.064899387Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.418188ms
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
12:42:04 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed.
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.419+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 84 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | interceptor.classes = []
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.068773929Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.447+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 44 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | internal.leave.group.on.close = true
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.069715241Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=942.242µs
12:42:04 policy-db-migrator | 
12:42:04 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.521+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 86 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.073771685Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
12:42:04 policy-db-migrator | 
12:42:04 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.551+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 45 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | isolation.level = read_uncommitted
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.074799198Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.027043ms
12:42:04 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
12:42:04 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.625+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 88 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.080170249Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:39:24,658] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.655+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 46 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | max.partition.fetch.bytes = 1048576
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.082060673Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.893554ms
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
12:42:04 kafka | [2024-04-25 12:39:24,658] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.727+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 90 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | max.poll.interval.ms = 300000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.088792983Z level=info msg="Executing migration" id="Add column paused in alert_definition"
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:39:24,660] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.760+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 47 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | max.poll.records = 500
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.095652413Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.85773ms
12:42:04 policy-db-migrator | 
12:42:04 kafka | [2024-04-25 12:39:24,660] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.830+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 92 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | metadata.max.age.ms = 300000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.105964439Z level=info msg="Executing migration" id="drop alert_definition table"
12:42:04 policy-db-migrator | 
12:42:04 kafka | [2024-04-25 12:39:24,660] INFO Kafka startTimeMs: 1714048764652 (org.apache.kafka.common.utils.AppInfoParser)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.863+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 48 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | metric.reporters = []
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.107290556Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.329817ms
12:42:04 policy-db-migrator | > upgrade 0730-toscaproperty.sql
12:42:04 kafka | [2024-04-25 12:39:24,661] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.933+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 94 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | metrics.num.samples = 2
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.11291131Z level=info msg="Executing migration" id="delete alert_definition_version table"
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:39:24,661] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:07.966+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 49 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | metrics.recording.level = INFO
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.113201955Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=290.145µs
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
12:42:04 kafka | [2024-04-25 12:39:24,661] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.037+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 96 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | metrics.sample.window.ms = 30000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.117941597Z level=info msg="Executing migration" id="recreate alert_definition_version table"
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:39:24,662] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.070+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 50 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.119356196Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.41673ms
12:42:04 policy-db-migrator | 
12:42:04 kafka | [2024-04-25 12:39:24,663] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.141+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 98 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | receive.buffer.bytes = 65536
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.169971623Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
12:42:04 policy-db-migrator | 
12:42:04 kafka | [2024-04-25 12:39:24,677] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.182+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 51 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | reconnect.backoff.max.ms = 1000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.172671929Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=2.707785ms
12:42:04 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
12:42:04 kafka | [2024-04-25 12:39:24,762] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.244+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 100 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | reconnect.backoff.ms = 50
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.18190061Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:39:24,820] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.285+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 52 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | request.timeout.ms = 30000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.18341041Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.50946ms
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
12:42:04 kafka | [2024-04-25 12:39:24,825] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.349+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 102 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | retry.backoff.ms = 100
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.213911792Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:39:24,863] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.389+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 53 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | sasl.client.callback.handler.class = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.214251267Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=342.355µs
12:42:04 policy-db-migrator | 
12:42:04 kafka | [2024-04-25 12:39:29,679] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.460+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 104 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | sasl.jaas.config = null
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.221668234Z level=info msg="Executing migration" id="drop alert_definition_version table"
12:42:04 policy-db-migrator | 
12:42:04 kafka | [2024-04-25 12:39:29,680] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.493+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 54 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.223196925Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.531301ms
12:42:04 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
12:42:04 kafka | [2024-04-25 12:40:01,537] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.563+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 106 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.227153136Z level=info msg="Executing migration" id="create alert_instance table"
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:40:01,538] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.598+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 55 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-pap | sasl.kerberos.service.name = null
12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.228538915Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.385669ms
12:42:04 kafka | [2024-04-25 12:40:01,810] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.667+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 108 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.235865271Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 12:42:04 kafka | [2024-04-25 12:40:02,019] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.701+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 56 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.237535943Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.672312ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.770+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 110 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,085] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(HOyl9LomSW2VRWzaH4p5QQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(hlyPC_3zQpGmePqsd4AOeA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 12:42:04 policy-pap | sasl.login.callback.handler.class = null 12:42:04 policy-db-migrator | 12:42:04 grafana | 
logger=migrator t=2024-04-25T12:39:28.245298706Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.804+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 57 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,086] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 12:42:04 policy-pap | sasl.login.class = null 12:42:04 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.246667314Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.371457ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.874+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 112 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,089] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.login.connect.timeout.ms = null 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.250851729Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.908+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 58 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.login.read.timeout.ms = null 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, 
conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.260775059Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=9.9183ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:08.979+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 114 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.login.refresh.buffer.seconds = 300 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.267904774Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.012+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 59 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.login.refresh.min.period.seconds = 60 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.269688837Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.787073ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.082+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 116 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.login.refresh.window.factor = 0.8 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.277062234Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.117+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 60 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.login.refresh.window.jitter = 0.05 12:42:04 policy-db-migrator | > upgrade 0770-toscarequirement.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.278513114Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" 
duration=1.45431ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.186+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 118 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.login.retry.backoff.max.ms = 10000 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.288662737Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.220+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 61 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.login.retry.backoff.ms = 100 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.315003755Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=26.340058ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.290+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 120 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.mechanism = GSSAPI 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.324893595Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.323+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 62 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,090] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.349902024Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=25.013089ms 12:42:04 policy-apex-pdp | 
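The repeated LEADER_NOT_AVAILABLE warnings from policy-apex-pdp here are expected while the policy-pdp-pap topic is being auto-created: the Kafka controller (the kafka | lines alongside) has not yet elected a leader for the new partition, so every metadata fetch fails until the partition comes online, and the Java client recovers on its own by refreshing metadata and retrying. A minimal sketch of that client side, assuming only the kafka:9092 bootstrap address and policy-pdp-pap topic that appear in this log (the class name and the test record are illustrative, not part of the build):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PdpPapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Matches retries = 2147483647 in the pap producer dump later in this
        // log: send() keeps refreshing metadata and retrying internally while
        // the topic leader is not yet available.
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 100);
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("policy-pdp-pap", "test"));
        }
    }
}

The warnings stop by themselves once the controller finishes its NewPartition transitions and elects leaders; no operator action is implied.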
[2024-04-25T12:40:09.393+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 122 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.oauthbearer.expected.audience = null 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.353914228Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.426+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 63 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.oauthbearer.expected.issuer = null 12:42:04 policy-db-migrator | > upgrade 0780-toscarequirements.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.354669998Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=757.68µs 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.497+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 124 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.358234884Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.532+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 64 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.358976714Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=741.86µs 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.602+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 126 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.365328398Z level=info msg="Executing migration" id="add current_reason column related to current_state" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.635+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 65 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.371374868Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.04591ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.706+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 128 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.oauthbearer.scope.claim.name = scope 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.376700378Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.742+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 66 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.oauthbearer.sub.claim.name = sub 12:42:04 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.381243387Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.537889ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.810+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 130 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,091] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-pap | sasl.oauthbearer.token.endpoint.url = null 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.38594989Z level=info 
msg="Executing migration" id="create alert_rule table" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.844+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 67 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | security.protocol = PLAINTEXT 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.387299617Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.350027ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.913+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 132 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | security.providers = null 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.393343398Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:09.947+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 68 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | send.buffer.bytes = 131072 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.394491462Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.150294ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.015+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 134 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | session.timeout.ms = 45000 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.398793719Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 12:42:04 policy-apex-pdp | 
[2024-04-25T12:40:10.050+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 69 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | socket.connection.setup.timeout.max.ms = 30000 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.400733484Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.940975ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.118+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 136 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | socket.connection.setup.timeout.ms = 10000 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.405326905Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.156+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 70 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.cipher.suites = null 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.406851095Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.52688ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.222+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 138 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.enabled.protocols = 
[TLSv1.2, TLSv1.3] 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.419670014Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.260+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 71 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.endpoint.identification.algorithm = https 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.420043919Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=378.915µs 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.326+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 140 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.engine.factory.class = null 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.428416899Z level=info msg="Executing migration" id="add column for to alert_rule" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.364+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 72 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.key.password = null 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.437415178Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.993509ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.429+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 142 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.keymanager.algorithm = SunX509 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | ssl.keystore.certificate.chain = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.467+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with 
correlation id 73 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.443272375Z level=info msg="Executing migration" id="add column annotations to alert_rule" 12:42:04 policy-pap | ssl.keystore.key = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.532+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 144 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.45122808Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=7.940905ms 12:42:04 policy-pap | ssl.keystore.location = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.572+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 74 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.455444296Z level=info msg="Executing migration" id="add column labels to alert_rule" 12:42:04 policy-pap | ssl.keystore.password = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.635+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 146 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.46109361Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.646344ms 12:42:04 policy-pap | ssl.keystore.type = JKS 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.674+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 75 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,092] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | > upgrade 0820-toscatrigger.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.470554275Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 12:42:04 policy-pap | ssl.protocol = TLSv1.3 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.738+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 148 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.472370669Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.816304ms 12:42:04 policy-pap | ssl.provider = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.777+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 76 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.479717045Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 12:42:04 policy-pap | ssl.secure.random.implementation = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.841+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 150 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.481028423Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.312998ms 12:42:04 policy-pap | ssl.trustmanager.algorithm = PKIX 12:42:04 kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.880+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 77 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.558378573Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 12:42:04 policy-pap | ssl.truststore.certificates = null 12:42:04 kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.945+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 152 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.564365172Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=5.983599ms 12:42:04 policy-pap | ssl.truststore.location = null 12:42:04 kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:10.983+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 78 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.663680601Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 12:42:04 policy-pap | ssl.truststore.password = null 12:42:04 kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.048+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 154 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.671880099Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=8.197848ms 12:42:04 policy-pap | ssl.truststore.type = JKS 12:42:04 kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.086+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 79 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 12:42:04 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 12:42:04 kafka | [2024-04-25 
12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.678952152Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.152+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 156 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | 12:42:04 kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.680493752Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.54504ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.189+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 80 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:01.059+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 12:42:04 kafka | [2024-04-25 12:40:02,093] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.684741889Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.254+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 158 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:01.060+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 12:42:04 kafka | [2024-04-25 12:40:02,094] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.694943143Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=10.193924ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.292+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 81 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 12:42:04 policy-pap | [2024-04-25T12:40:01.060+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048801059 12:42:04 kafka | [2024-04-25 12:40:02,094] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.703189741Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 12:42:04 
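The policy-pap lines here close out a consumer bootstrap: the config dump above (session.timeout.ms = 45000, security.protocol = PLAINTEXT, StringDeserializer for values), the AppInfoParser banner for Kafka 3.6.1, and, just below, consumer-policy-pap-4 subscribing to policy-pdp-pap. A minimal sketch of an equivalent consumer, using only values that appear in this log (group id policy-pap as shown in the clientId/groupId below):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 45000);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            // fetchTimeout=15000 in the BusTopicSource lines below maps to a
            // 15-second poll window.
            consumer.poll(Duration.ofMillis(15000))
                    .forEach(record -> System.out.println(record.value()));
        }
    }
}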
policy-apex-pdp | [2024-04-25T12:40:11.357+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 160 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:01.060+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 12:42:04 kafka | [2024-04-25 12:40:02,094] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.708049846Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.857894ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.395+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 82 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 12:42:04 policy-pap | [2024-04-25T12:40:01.060+00:00|INFO|ServiceManager|main] Policy PAP starting topics 12:42:04 kafka | [2024-04-25 12:40:02,094] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.716342445Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.460+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 162 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:02,094] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.716465036Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=117.561µs 12:42:04 policy-pap | [2024-04-25T12:40:01.060+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=adf16b33-6825-4228-b603-1e51991b0aaa, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.498+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 83 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.725491165Z level=info msg="Executing migration" id="create alert_rule_version table" 12:42:04 policy-pap | [2024-04-25T12:40:01.060+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=53d3b957-3026-4843-bc4f-55d426241089, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.563+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 164 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.726942795Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.45341ms 12:42:04 policy-pap | [2024-04-25T12:40:01.060+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4a68cc94-8fc9-4290-b3af-12928780cd05, alive=false, publisher=null]]: starting 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.601+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 84 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 12:42:04 kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.735107373Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 12:42:04 policy-pap | [2024-04-25T12:40:01.076+00:00|INFO|ProducerConfig|main] ProducerConfig values: 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.668+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 166 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.73648946Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version 
columns" duration=1.383247ms 12:42:04 policy-pap | acks = -1 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.704+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 85 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 12:42:04 kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.741936202Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 12:42:04 policy-pap | auto.include.jmx.reporter = true 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.771+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 168 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.743201229Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.262997ms 12:42:04 policy-pap | batch.size = 16384 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.809+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 86 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.748521209Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 12:42:04 policy-pap | bootstrap.servers = [kafka:9092] 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.876+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 170 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.74862304Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=104.451µs 12:42:04 policy-pap | buffer.memory = 33554432 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.912+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 87 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | > upgrade 
0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.752757095Z level=info msg="Executing migration" id="add column for to alert_rule_version" 12:42:04 kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | client.dns.lookup = use_all_dns_ips 12:42:04 policy-apex-pdp | [2024-04-25T12:40:11.981+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 172 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.758817955Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.05758ms 12:42:04 kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | client.id = producer-1 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.017+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 88 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.776289545Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 12:42:04 kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | compression.type = none 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.087+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 174 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.782428796Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.13622ms 12:42:04 kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | connections.max.idle.ms = 540000 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.120+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 89 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.790038456Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 12:42:04 kafka | [2024-04-25 12:40:02,100] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | delivery.timeout.ms = 120000 12:42:04 policy-apex-pdp | 
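The ProducerConfig values being dumped around here describe an idempotent, maximum-durability publisher: acks = -1 (the same setting as acks=all), enable.idempotence = true, max.in.flight.requests.per.connection = 5 and retries = 2147483647 are exactly the combination Kafka's idempotent producer requires for duplicate-free, ordered delivery per partition. A sketch setting the same guarantees explicitly, using only values from this dump (the class and method names are illustrative):

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class DurableProducerConfigSketch {
    public static Properties durableProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        // "-1" and "all" are equivalent: wait for the full in-sync replica set.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Idempotence needs acks=all, retries > 0, and at most 5 in-flight
        // requests per connection -- all satisfied by the values in this dump.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384); // batch.size above
        props.put(ProducerConfig.LINGER_MS_CONFIG, 0);      // linger.ms above
        return props;
    }
}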
[2024-04-25T12:40:12.191+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 176 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.796034185Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=5.992439ms 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | enable.idempotence = true 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.224+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 90 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.801404196Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | interceptor.classes = [] 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.295+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 178 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.807535837Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.12094ms 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.328+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 91 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.8115519Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | linger.ms = 0 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.400+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 180 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.818868586Z level=info msg="Migration 
successfully executed" id="add is_paused column to alert_rule_versions table" duration=7.309966ms 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | max.block.ms = 60000 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.431+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 92 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.867590559Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | max.in.flight.requests.per.connection = 5 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.502+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 182 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.86770924Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=122.401µs 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | max.request.size = 1048576 12:42:04 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.534+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 93 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.874225296Z level=info msg="Executing migration" id=create_alert_configuration_table 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | metadata.max.age.ms = 300000 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.604+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 184 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.87525471Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.031604ms 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | metadata.max.idle.ms = 300000 12:42:04 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 12:42:04 policy-apex-pdp | 
[2024-04-25T12:40:12.649+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 94 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.882634157Z level=info msg="Executing migration" id="Add column default in alert_configuration" 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | metric.reporters = [] 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.707+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 186 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.889799402Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.162635ms 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | metrics.num.samples = 2 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.752+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 95 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.893825204Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | metrics.recording.level = INFO 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.809+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 188 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.893919245Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=96.691µs 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | metrics.sample.window.ms = 30000 12:42:04 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.856+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 96 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.898067651Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | partitioner.adaptive.partitioning.enable = true 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.912+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 190 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.909466311Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=11.39232ms 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | partitioner.availability.timeout.ms = 0 12:42:04 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:12.959+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 97 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.917032871Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | partitioner.class = null 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.016+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 192 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.918264747Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.234116ms 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | partitioner.ignore.keys = false 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.061+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 98 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.924522779Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | receive.buffer.bytes = 32768 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.119+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with 
correlation id 194 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.93297899Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=8.457291ms 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | reconnect.backoff.max.ms = 1000 12:42:04 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.164+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 99 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.938582474Z level=info msg="Executing migration" id=create_ngalert_configuration_table 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | reconnect.backoff.ms = 50 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.220+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 196 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.939647769Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.067555ms 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | request.timeout.ms = 30000 12:42:04 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.268+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 100 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.944703915Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | retries = 2147483647 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.323+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 198 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.946549449Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.842714ms 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica 
(state.change.logger) 12:42:04 policy-pap | retry.backoff.ms = 100 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.371+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 101 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.951647877Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | sasl.client.callback.handler.class = null 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.426+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 200 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.961389075Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=9.737058ms 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | sasl.jaas.config = null 12:42:04 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.474+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 102 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.966795516Z level=info msg="Executing migration" id="create provenance_type table" 12:42:04 kafka | [2024-04-25 12:40:02,101] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.528+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 202 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.967661888Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=868.232µs 12:42:04 kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 12:42:04 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.578+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 103 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] 
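
The policy-db-migrator entries interleaved here are plain DDL files (0880, 0890, 0900, 0910, ...) replayed in order against the policy database; each "upgrade" block logged between the dashed separators carries a single statement. As a minimal sketch of applying one such script by hand over JDBC — the JDBC URL, credentials, and file path below are hypothetical placeholders, not values from this job, and the real migrator drives these files from its own wrapper rather than Java:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ApplyMigrationSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; the CSIT containers inject the real ones.
            String url = "jdbc:mariadb://mariadb:3306/policyadmin";
            // Each upgrade file logged here holds one statement, so execute() suffices.
            String ddl = Files.readString(Path.of("0880-FK_ToscaServiceTemplate_nodeTypesName.sql"));
            try (Connection c = DriverManager.getConnection(url, "policy_user", "policy_password");
                 Statement s = c.createStatement()) {
                s.execute(ddl);
            }
        }
    }
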
Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | sasl.kerberos.service.name = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.976461634Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.631+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 204 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.978718704Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=2.25974ms 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.682+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 104 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.987053084Z level=info msg="Executing migration" id="create alert_image table" 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.734+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 206 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | sasl.login.callback.handler.class = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:28.988243859Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.193285ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.785+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 105 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | sasl.login.class = null 12:42:04 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.132678132Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.838+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, 
groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 208 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | sasl.login.connect.timeout.ms = null 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.134022429Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.348607ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.888+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 106 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | sasl.login.read.timeout.ms = null 12:42:04 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.146612115Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.940+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 210 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | sasl.login.refresh.buffer.seconds = 300 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.14691112Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=298.235µs 12:42:04 policy-apex-pdp | [2024-04-25T12:40:13.991+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 107 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | sasl.login.refresh.min.period.seconds = 60 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.152515273Z level=info msg="Executing migration" id=create_alert_configuration_history_table 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.042+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 212 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-pap | sasl.login.refresh.window.factor = 0.8 12:42:04 policy-pap | sasl.login.refresh.window.jitter = 0.05 12:42:04 grafana | 
logger=migrator t=2024-04-25T12:39:29.154205245Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.685372ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.099+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 108 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | sasl.login.retry.backoff.max.ms = 10000 12:42:04 kafka | [2024-04-25 12:40:02,102] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.161896637Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.145+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 214 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | sasl.login.retry.backoff.ms = 100 12:42:04 kafka | [2024-04-25 12:40:02,102] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 12:42:04 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.163109682Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.209705ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.203+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 109 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | sasl.mechanism = GSSAPI 12:42:04 kafka | [2024-04-25 12:40:03,937] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.167398299Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.248+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 216 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 12:42:04 kafka | [2024-04-25 12:40:03,937] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.1682777Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 12:42:04 policy-apex-pdp 
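
The recurring policy-apex-pdp warnings are benign at this stage: the policy-pdp-pap topic has just been created, no partition leader has been elected yet, and the Kafka client simply keeps refreshing metadata (hence the climbing correlation ids) until the controller brings the partition online. A minimal sketch, assuming the broker address and topic names from this log, of a producer whose retry settings mirror the policy-pap config dump interleaved above:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapSendSketch {
        public static void main(String[] args) {
            Properties p = new Properties();
            p.put("bootstrap.servers", "kafka:9092");   // as in the policy-pap dump
            p.put("key.serializer", StringSerializer.class.getName());
            p.put("value.serializer", StringSerializer.class.getName());
            p.put("retries", 2147483647);               // logged value: retries = 2147483647
            p.put("retry.backoff.ms", 100);             // logged value: retry.backoff.ms = 100
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
                // While no leader exists for policy-pdp-pap, the client queues the record,
                // logs LEADER_NOT_AVAILABLE warnings like those above, and retries internally.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "ping"));
                producer.flush();
            }
        }
    }
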
| [2024-04-25T12:40:14.306+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 110 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | sasl.oauthbearer.expected.audience = null 12:42:04 kafka | [2024-04-25 12:40:03,937] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.172266634Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.350+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 218 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | sasl.oauthbearer.expected.issuer = null 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.173107284Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=841.2µs 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.410+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 111 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.178450465Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.452+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 220 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.179815832Z level=info msg="Migration 
successfully executed" id="add unique index on orgID to alert_configuration" duration=1.364697ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.513+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 112 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.184392803Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.555+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 222 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.193779466Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=9.388703ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.616+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 113 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | sasl.oauthbearer.scope.claim.name = scope 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.197112681Z level=info msg="Executing migration" id="create library_element table v1" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.659+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 224 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | sasl.oauthbearer.sub.claim.name = sub 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.197977462Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=864.791µs 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.720+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 114 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | sasl.oauthbearer.token.endpoint.url = null 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.206275591Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.761+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 226 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | security.protocol = PLAINTEXT 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.207679339Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.406538ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.823+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 115 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | security.providers = null 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.21148184Z level=info msg="Executing migration" id="create library_element_connection table v1" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.864+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 228 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | send.buffer.bytes = 131072 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
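
The kafka controller lines trace its partition state machine: each __consumer_offsets partition moves from NewPartition to OnlinePartition once LeaderAndIsr assigns broker 1 as leader. A client can watch the same transition through AdminClient metadata; a minimal sketch, where the polling loop and interval are illustrative choices and the topic is assumed to already exist:

    import java.util.Set;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.TopicDescription;

    class WaitForOnlinePartitions {
        // Polls topic metadata until every partition reports an elected leader,
        // i.e. the controller has completed the NewPartition -> OnlinePartition
        // transition logged above.
        static void await(Admin admin, String topic) throws Exception {
            while (true) {
                TopicDescription d = admin.describeTopics(Set.of(topic))
                        .allTopicNames().get().get(topic);
                if (d.partitions().stream().allMatch(p -> p.leader() != null)) {
                    return;
                }
                Thread.sleep(200); // arbitrary poll interval
            }
        }
    }
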
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.212775117Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.291417ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.927+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 116 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | socket.connection.setup.timeout.max.ms = 30000 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.21755667Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:14.966+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 230 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | socket.connection.setup.timeout.ms = 10000 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.219506666Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.949316ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.030+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 117 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.cipher.suites = null 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.224947856Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.069+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 232 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap 
| ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.226333365Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.384879ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.133+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 118 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.endpoint.identification.algorithm = https 12:42:04 kafka | [2024-04-25 12:40:03,938] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.23054552Z level=info msg="Executing migration" id="increase max description length to 2048" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.172+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 234 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.engine.factory.class = null 12:42:04 kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.230635591Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=93.061µs 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.236+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 119 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.key.password = null 12:42:04 kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.23509879Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 12:42:04 policy-apex-pdp | 
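
The policy-pap dump shows security.protocol = PLAINTEXT with every ssl.* setting left at null, i.e. the CSIT brokers are reached without TLS. Switching the same client to TLS is purely configuration; in this sketch the truststore path and password are hypothetical placeholders, and only the protocol echoes the logged ssl.protocol = TLSv1.3 default:

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.common.config.SslConfigs;

    class TlsOverlaySketch {
        // Hypothetical TLS settings; store path and password are placeholders,
        // not values from this job.
        static Properties sslOverlay() {
            Properties p = new Properties();
            p.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
            p.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/opt/app/policy/etc/ssl/policy-truststore");
            p.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");
            p.put(SslConfigs.SSL_PROTOCOL_CONFIG, "TLSv1.3"); // matches the logged ssl.protocol
            return p;
        }
    }
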
[2024-04-25T12:40:15.275+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 236 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.keymanager.algorithm = SunX509 12:42:04 kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.235278673Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=180.013µs 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.339+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 120 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.keystore.certificate.chain = null 12:42:04 kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.239840703Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.377+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 238 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.keystore.key = null 12:42:04 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 12:42:04 kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.240313859Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=473.106µs 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.443+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 121 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.keystore.location = null 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.248947783Z level=info msg="Executing migration" id="create data_keys table" 12:42:04 policy-apex-pdp | 
[2024-04-25T12:40:15.480+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 240 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.keystore.password = null 12:42:04 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 12:42:04 kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.250695706Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.748433ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.547+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 122 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.keystore.type = JKS 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.2562556Z level=info msg="Executing migration" id="create secrets table" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.583+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 242 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | ssl.protocol = TLSv1.3 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.257227582Z level=info msg="Migration successfully executed" id="create secrets table" duration=972.082µs 12:42:04 policy-pap | ssl.provider = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.651+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 123 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.261157333Z level=info msg="Executing migration" id="rename data_keys name 
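
The ALTER TABLE ... ON UPDATE RESTRICT ON DELETE RESTRICT constraints being added in the 0960-0990 scripts mean a referenced parent row, e.g. a toscarequirements entry, can no longer be deleted or re-keyed while a node type or node template still points at it. A small sketch of what that looks like from JDBC, with hypothetical key values:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLIntegrityConstraintViolationException;

    class RestrictDemo {
        // Returns false when ON DELETE RESTRICT blocks the delete because a
        // toscanodetype or toscanodetemplate row still references this requirement.
        // The (name, version) arguments are hypothetical.
        static boolean tryDeleteRequirement(Connection c, String name, String version) throws Exception {
            try (PreparedStatement ps = c.prepareStatement(
                    "DELETE FROM toscarequirements WHERE name = ? AND version = ?")) {
                ps.setString(1, name);
                ps.setString(2, version);
                ps.executeUpdate();
                return true;
            } catch (SQLIntegrityConstraintViolationException e) {
                return false;
            }
        }
    }
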
column to id" 12:42:04 policy-pap | ssl.secure.random.implementation = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.685+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 244 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 12:42:04 kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.292869972Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=31.711279ms 12:42:04 policy-pap | ssl.trustmanager.algorithm = PKIX 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.755+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 124 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,939] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.297830066Z level=info msg="Executing migration" id="add name column into data_keys" 12:42:04 policy-pap | ssl.truststore.certificates = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.788+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 246 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 12:42:04 kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.303133596Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.3035ms 12:42:04 policy-pap | ssl.truststore.location = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.858+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 125 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.308490117Z level=info msg="Executing migration" id="copy data_keys id column values into name" 12:42:04 policy-pap | ssl.truststore.password = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.890+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 248 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.30874169Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=251.293µs 12:42:04 policy-pap | ssl.truststore.type = JKS 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.960+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 126 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.314527626Z level=info msg="Executing migration" id="rename data_keys name column to label" 12:42:04 policy-pap | transaction.timeout.ms = 60000 12:42:04 policy-apex-pdp | [2024-04-25T12:40:15.993+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 250 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 12:42:04 kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.348435494Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=33.905198ms 12:42:04 policy-pap | transactional.id = null 12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.066+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 127 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
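
The grafana data_keys migrations follow the usual expand-and-copy pattern for renaming a column without losing data: add the new column, copy values across, then retire the old name (visible above as the rename/add/copy/rename sequence, each step taking tens of milliseconds). A generic sketch of that pattern — the column type and width are assumptions, and Grafana emits dialect-specific SQL rather than this exact statement:

    import java.sql.Connection;
    import java.sql.Statement;

    class RenameColumnViaCopy {
        // Expand-and-copy rename, mirroring the logged data_keys steps.
        static void renameNameToLabel(Connection c) throws Exception {
            try (Statement s = c.createStatement()) {
                s.execute("ALTER TABLE data_keys ADD COLUMN label VARCHAR(100) NOT NULL DEFAULT ''");
                s.execute("UPDATE data_keys SET label = name");
                // A follow-up step can then drop or repurpose the old column,
                // much like the rename steps recorded above.
            }
        }
    }
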
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.354624725Z level=info msg="Executing migration" id="rename data_keys id column back to name"
12:42:04 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.096+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 252 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
12:42:04 kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.387303375Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=32.67765ms
12:42:04 policy-pap |
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.169+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 128 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
12:42:04 policy-pap | [2024-04-25T12:40:01.087+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
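
policy-pap's "Instantiated an idempotent producer" line follows directly from the config dumps in this log: acks = -1 (all), retries = 2147483647, and max.in.flight.requests.per.connection = 5 are exactly the combination under which the Kafka client enables idempotence. A minimal sketch reproducing that setup (broker address as logged; the flush-and-exit body is illustrative):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class IdempotentProducerSketch {
        public static void main(String[] args) {
            Properties p = new Properties();
            p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // The combination the log reports: idempotence on, acks=all (-1),
            // effectively unbounded retries, at most 5 in-flight requests per connection.
            p.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            p.put(ProducerConfig.ACKS_CONFIG, "all");
            p.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
            p.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
                producer.flush(); // construction alone triggers the "idempotent producer" log line
            }
        }
    }
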
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.396628498Z level=info msg="Executing migration" id="create kv_store table v1"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.199+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 254 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
12:42:04 policy-pap | [2024-04-25T12:40:01.103+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.398103917Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.478419ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.272+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 129 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
12:42:04 policy-pap | [2024-04-25T12:40:01.103+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.402251742Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.302+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 256 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
12:42:04 kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
12:42:04 policy-pap | [2024-04-25T12:40:01.103+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048801103
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.404281738Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=2.030296ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.376+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 130 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:40:03,940] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
12:42:04 policy-pap | [2024-04-25T12:40:01.104+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4a68cc94-8fc9-4290-b3af-12928780cd05, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.460359927Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.471+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 258 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
12:42:04 kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
12:42:04 policy-pap | [2024-04-25T12:40:01.104+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=14d92362-e0b3-4597-b9c4-41b06f6af1c6, alive=false, publisher=null]]: starting
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.460880355Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=520.578µs
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.478+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 131 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
12:42:04 policy-pap | [2024-04-25T12:40:01.104+00:00|INFO|ProducerConfig|main] ProducerConfig values:
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.46889952Z level=info msg="Executing migration" id="create permission table"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.574+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 260 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
12:42:04 policy-pap | acks = -1
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.470489431Z level=info msg="Migration successfully executed" id="create permission table" duration=1.589741ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.580+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 132 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
12:42:04 policy-pap | auto.include.jmx.reporter = true
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.478230183Z level=info msg="Executing migration" id="add unique index permission.role_id"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.679+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 262 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
12:42:04 kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
12:42:04 policy-pap | batch.size = 16384
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.479379717Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.150204ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.683+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 133 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
12:42:04 policy-pap | bootstrap.servers = [kafka:9092]
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.485366206Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.782+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 264 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
12:42:04 kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
12:42:04 policy-pap | buffer.memory = 33554432
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.487651806Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=2.28582ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.786+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 134 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:40:03,941] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
12:42:04 policy-pap | client.dns.lookup = use_all_dns_ips
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.493464123Z level=info msg="Executing migration" id="create role table"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.886+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 266 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
12:42:04 policy-pap | client.id = producer-2
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.494590478Z level=info msg="Migration successfully executed" id="create role table" duration=1.125745ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.889+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 135 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
12:42:04 policy-pap | compression.type = none
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.498480699Z level=info msg="Executing migration" id="add column display_name"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.989+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 268 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
12:42:04 kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
12:42:04 policy-pap | connections.max.idle.ms = 540000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.506246482Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.760233ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:16.992+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 136 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
12:42:04 policy-pap | delivery.timeout.ms = 120000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.50993194Z level=info msg="Executing migration" id="add column group_name"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.093+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 270 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
12:42:04 kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
12:42:04 policy-pap | enable.idempotence = true
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.517843984Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.911304ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.095+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 137 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
12:42:04 policy-pap | interceptor.classes = []
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.523472509Z level=info msg="Executing migration" id="add index role.org_id"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.197+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 272 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
12:42:04 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.524549973Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.077433ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.200+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 138 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
12:42:04 policy-pap | linger.ms = 0
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.528598836Z level=info msg="Executing migration" id="add unique index role_org_id_name"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.299+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 274 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
12:42:04 kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
12:42:04 policy-pap | max.block.ms = 60000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.530079725Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.480799ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.304+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 139 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
12:42:04 policy-pap | max.in.flight.requests.per.connection = 5
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.533722904Z level=info msg="Executing migration" id="add index role_org_id_uid"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.403+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 276 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
12:42:04 kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
12:42:04 policy-pap | max.request.size = 1048576
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.534837298Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.116214ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.409+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 140 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:40:03,943] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
12:42:04 policy-pap | metadata.max.age.ms = 300000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.547795179Z level=info msg="Executing migration" id="create team role table"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.507+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 278 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
12:42:04 policy-pap | metadata.max.idle.ms = 300000
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.549340219Z level=info msg="Migration successfully executed" id="create team role table" duration=1.54472ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.511+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 141 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
12:42:04 policy-db-migrator |
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
12:42:04 policy-pap | metric.reporters = []
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.55472763Z level=info msg="Executing migration" id="add index team_role.org_id"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.609+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 280 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
12:42:04 policy-pap | metrics.num.samples = 2
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.556618315Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.890405ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.615+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 142 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
12:42:04 policy-pap | metrics.recording.level = INFO
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.560604117Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.713+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 282 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
12:42:04 policy-pap | metrics.sample.window.ms = 30000
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.561868765Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.264438ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.717+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 143 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | partitioner.adaptive.partitioning.enable = true
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.641192479Z level=info msg="Executing migration" id="add index team_role.team_id"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.816+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 285 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 policy-pap | partitioner.availability.timeout.ms = 0
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.643249856Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=2.058177ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.820+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 144 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 policy-pap | partitioner.class = null
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.655084642Z level=info msg="Executing migration" id="create user role table"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.918+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 287 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
12:42:04 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
12:42:04 policy-pap | partitioner.ignore.keys = false
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.656167026Z level=info msg="Migration successfully executed" id="create user role table" duration=1.081154ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:17.922+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 145 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | receive.buffer.bytes = 32768
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.674217994Z level=info msg="Executing migration" id="add index user_role.org_id"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.022+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 289 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
12:42:04 policy-pap | reconnect.backoff.max.ms = 1000
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.676251251Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=2.033607ms
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.025+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 146 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 policy-pap | reconnect.backoff.ms = 50
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.683889421Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.126+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 291 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.685365531Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.4786ms
12:42:04 policy-pap | request.timeout.ms = 30000
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.128+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 147 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.693985775Z level=info msg="Executing migration" id="add index user_role.user_id"
12:42:04 policy-pap | retries = 2147483647
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.229+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 148 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
12:42:04 policy-db-migrator | > upgrade 0100-pdp.sql
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.69517818Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.196665ms
12:42:04 policy-pap | retry.backoff.ms = 100
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.230+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 293 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
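The policy-db-migrator statements threaded through the lines above (upgrades 1010 through 1060) wire toscaservicetemplate and toscatopologytemplate to their referenced type tables with RESTRICT foreign keys. A small JDBC sketch of how those constraints could be verified after the run; the information_schema query is standard MariaDB/MySQL, but the JDBC URL, schema name, and credentials are illustrative placeholders, not values taken from this log:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class VerifyToscaForeignKeys {
        public static void main(String[] args) throws Exception {
            // connection details are placeholders, not from the log
            try (Connection c = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_user")) {
                PreparedStatement ps = c.prepareStatement(
                        "SELECT CONSTRAINT_NAME, REFERENCED_TABLE_NAME "
                        + "FROM information_schema.REFERENTIAL_CONSTRAINTS "
                        + "WHERE CONSTRAINT_SCHEMA = ? AND TABLE_NAME IN (?, ?)");
                ps.setString(1, "policyadmin");
                ps.setString(2, "toscaservicetemplate");
                ps.setString(3, "toscatopologytemplate");
                try (ResultSet rs = ps.executeQuery()) {
                    // expect the FK_ToscaServiceTemplate_* constraints from 1010-1030 plus
                    // the topology-template constraints added in 1040-1060
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " -> " + rs.getString(2));
                    }
                }
            }
        }
    }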
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.704198229Z level=info msg="Executing migration" id="create builtin role table"
12:42:04 policy-pap | sasl.client.callback.handler.class = null
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.330+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 149 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
12:42:04 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.705860311Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.662241ms
12:42:04 policy-pap | sasl.jaas.config = null
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.333+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 295 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.711526095Z level=info msg="Executing migration" id="add index builtin_role.role_id"
12:42:04 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.434+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 150 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.712831093Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.305578ms
12:42:04 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.436+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 297 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.716095046Z level=info msg="Executing migration" id="add index builtin_role.name"
12:42:04 policy-pap | sasl.kerberos.service.name = null
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.537+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 151 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.717248421Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.153494ms
12:42:04 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.539+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 299 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.720612605Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
12:42:04 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.640+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 152 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.728839753Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.227109ms
12:42:04 policy-pap | sasl.login.callback.handler.class = null
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.642+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 301 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator | --------------
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.733005098Z level=info msg="Executing migration" id="add index builtin_role.org_id"
12:42:04 policy-pap | sasl.login.class = null
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.743+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 153 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.734363096Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.357268ms
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
12:42:04 policy-pap | sasl.login.connect.timeout.ms = null
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.745+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 303 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 policy-db-migrator |
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.738520691Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
12:42:04 policy-pap | sasl.login.read.timeout.ms = null
12:42:04 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.847+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 305 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.739649966Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.128605ms
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
12:42:04 policy-pap | sasl.login.refresh.buffer.seconds = 300
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.848+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 154 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.748578013Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
12:42:04 policy-pap | sasl.login.refresh.min.period.seconds = 60
12:42:04 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.949+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 307 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.750156354Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.577241ms
12:42:04 policy-pap | sasl.login.refresh.window.factor = 0.8
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | [2024-04-25T12:40:18.951+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 155 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.754694134Z level=info msg="Executing migration" id="add unique index role.uid"
12:42:04 policy-pap | sasl.login.refresh.window.jitter = 0.05
12:42:04 policy-db-migrator |
12:42:04 policy-apex-pdp | [2024-04-25T12:40:19.052+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 309 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 kafka | [2024-04-25 12:40:03,944] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.756488817Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.794723ms
12:42:04 policy-pap | sasl.login.retry.backoff.max.ms = 10000
12:42:04 policy-db-migrator |
12:42:04 policy-apex-pdp | [2024-04-25T12:40:19.056+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 156 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 kafka | [2024-04-25 12:40:03,945] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.765332604Z level=info msg="Executing migration" id="create seed assignment table"
12:42:04 policy-pap | sasl.login.retry.backoff.ms = 100
12:42:04 policy-db-migrator | > upgrade 0130-pdpstatistics.sql
12:42:04 policy-apex-pdp | [2024-04-25T12:40:19.156+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Error while fetching metadata with correlation id 311 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
12:42:04 kafka | [2024-04-25 12:40:03,945] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.766165335Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=841.171µs
12:42:04 policy-pap | sasl.mechanism = GSSAPI
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | [2024-04-25T12:40:19.157+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 157 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
12:42:04 kafka | [2024-04-25 12:40:03,945] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.774123449Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
12:42:04 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
12:42:04 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
12:42:04 policy-apex-pdp | [2024-04-25T12:40:19.266+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
12:42:04 kafka | [2024-04-25 12:40:03,945] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.775983114Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.859125ms
12:42:04 policy-pap | sasl.oauthbearer.expected.audience = null
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | [2024-04-25T12:40:19.273+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] (Re-)joining group
12:42:04 kafka | [2024-04-25 12:40:03,945] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.783455613Z level=info msg="Executing migration" id="add column hidden to role table"
12:42:04 policy-pap | sasl.oauthbearer.expected.issuer = null
12:42:04 policy-db-migrator |
12:42:04 policy-apex-pdp | [2024-04-25T12:40:19.300+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Request joining group due to: need to re-join with the given member-id: consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2-52209b4a-6d81-4373-80ef-9ff30791323e
12:42:04 kafka | [2024-04-25 12:40:03,945] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
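The ConsumerCoordinator lines that begin here record the standard Kafka consumer-group handshake: the coordinator is discovered, the first JoinGroup is rejected so the broker can hand out a member id (the MemberIdRequiredException re-join appears just below), and the group then joins, syncs, and gets policy-pdp-pap-0 assigned. A minimal client-side sketch that drives the same sequence; the group id reuses the UUID-style name from the log, everything else is a generic assumption:

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class PdpPapListener {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            // mirrors the groupId seen in the log; any stable name works
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "4b79aeb3-604a-4e33-80d9-cdeedf19ce63");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            // with no committed offset for policy-pdp-pap-0 ("Found no committed offset"
            // below), an explicit reset policy decides where consumption starts
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // the first poll() drives coordinator discovery, the member-id retry,
                // JoinGroup/SyncGroup, and the partition assignment seen in the log
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    records.forEach(r -> System.out.printf("%s-%d@%d: %s%n",
                            r.topic(), r.partition(), r.offset(), r.value()));
                }
            }
        }
    }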
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.791428928Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.971775ms
12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
12:42:04 policy-db-migrator |
12:42:04 policy-apex-pdp | [2024-04-25T12:40:19.300+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
12:42:04 kafka | [2024-04-25 12:40:03,945] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.79536221Z level=info msg="Executing migration" id="permission kind migration"
12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
12:42:04 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
12:42:04 policy-apex-pdp | [2024-04-25T12:40:19.300+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] (Re-)joining group
12:42:04 kafka | [2024-04-25 12:40:03,945] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.803394566Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.034425ms
12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | [2024-04-25T12:40:22.327+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Successfully joined group with generation Generation{generationId=1, memberId='consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2-52209b4a-6d81-4373-80ef-9ff30791323e', protocol='range'}
12:42:04 kafka | [2024-04-25 12:40:03,946] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.80682643Z level=info msg="Executing migration" id="permission attribute migration"
12:42:04 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
12:42:04 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
12:42:04 policy-apex-pdp | [2024-04-25T12:40:22.336+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Finished assignment for group at generation 1: {consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2-52209b4a-6d81-4373-80ef-9ff30791323e=Assignment(partitions=[policy-pdp-pap-0])}
12:42:04 kafka | [2024-04-25 12:40:03,948] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.813110223Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.280653ms
12:42:04 policy-pap | sasl.oauthbearer.scope.claim.name = scope
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | [2024-04-25T12:40:22.370+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Successfully synced group in generation Generation{generationId=1, memberId='consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2-52209b4a-6d81-4373-80ef-9ff30791323e', protocol='range'}
12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.818544775Z level=info msg="Executing migration" id="permission identifier migration"
12:42:04 policy-pap | sasl.oauthbearer.sub.claim.name = sub
12:42:04 policy-db-migrator |
12:42:04 policy-apex-pdp | [2024-04-25T12:40:22.371+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.864335278Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=45.789613ms
12:42:04 policy-pap | sasl.oauthbearer.token.endpoint.url = null
12:42:04 policy-db-migrator | --------------
12:42:04 policy-apex-pdp | [2024-04-25T12:40:22.373+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Adding newly assigned partitions: policy-pdp-pap-0
12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.90318776Z level=info msg="Executing migration" id="add permission identifier index"
12:42:04 policy-pap | security.protocol = PLAINTEXT
12:42:04 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
12:42:04 policy-apex-pdp | [2024-04-25T12:40:22.392+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Found no committed offset for partition policy-pdp-pap-0
12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48
from NewReplica to OnlineReplica (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.905574101Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=2.383581ms 12:42:04 policy-pap | security.providers = null 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | [2024-04-25T12:40:22.407+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2, groupId=4b79aeb3-604a-4e33-80d9-cdeedf19ce63] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.911920535Z level=info msg="Executing migration" id="add permission action scope role_id index" 12:42:04 policy-pap | send.buffer.bytes = 131072 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | [2024-04-25T12:40:22.843+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.914722922Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=2.804697ms 12:42:04 policy-pap | socket.connection.setup.timeout.max.ms = 30000 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a408b809-2a21-46db-ba3c-dbdbae06aca1","timestampMs":1714048822843,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} 12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.919122409Z level=info msg="Executing migration" id="remove permission role_id action scope index" 12:42:04 policy-pap | socket.connection.setup.timeout.ms = 10000 12:42:04 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 12:42:04 policy-apex-pdp | [2024-04-25T12:40:22.902+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.920617249Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.49506ms 12:42:04 policy-pap | ssl.cipher.suites = null 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a408b809-2a21-46db-ba3c-dbdbae06aca1","timestampMs":1714048822843,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.928016076Z level=info msg="Executing migration" id="create query_history table v1" 
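Note: the apex-pdp lines above trace a complete Kafka consumer-group handshake: the first join attempt is rejected with MemberIdRequiredException (the broker hands the client a member id before admitting it), the re-join succeeds at generation 1, the group syncs, partition policy-pdp-pap-0 is assigned, and, since the group has no committed offset, the position is reset. For reference, a minimal Java consumer exercising the same protocol might look as follows; the topic name, group id, and bootstrap address are taken from the log, while the class name and the reset policy are illustrative assumptions, not ONAP code:

    import java.time.Duration;
    import java.util.Collection;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class PdpPapListener {  // hypothetical class name, not from the source
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // broker id 1 in the log
            props.put(ConsumerConfig.GROUP_ID_CONFIG,
                      "4b79aeb3-604a-4e33-80d9-cdeedf19ce63");                // groupId from the log
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");
            // Consistent with "Found no committed offset ... Resetting offset" above
            // (assumed; the actual ONAP setting is not shown in this section).
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                    @Override public void onPartitionsRevoked(Collection<TopicPartition> parts) { }
                    @Override public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        // Logged above as "Adding newly assigned partitions: policy-pdp-pap-0"
                        System.out.println("assigned: " + parts);
                    }
                });
                while (true) {
                    for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                        System.out.printf("[IN|KAFKA|%s] %s%n", rec.topic(), rec.value());
                    }
                }
            }
        }
    }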
12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 12:42:04 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 12:42:04 policy-apex-pdp | [2024-04-25T12:40:22.905+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.929573007Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.556071ms 12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.endpoint.identification.algorithm = https 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.404+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.937716045Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.engine.factory.class = null 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","timestampMs":1714048823348,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.938817249Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.101134ms 12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.key.password = null 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.410+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.942819982Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.keymanager.algorithm = SunX509 12:42:04 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.410+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.943047845Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=227.933µs 12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | 
ssl.keystore.certificate.chain = null 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"12f5c5ff-7241-48ae-ba7d-84cdf580311e","timestampMs":1714048823410,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.947872349Z level=info msg="Executing migration" id="rbac disabled migrator" 12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.keystore.key = null 12:42:04 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.411+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.948022631Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=73.891µs 12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.keystore.location = null 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.953941459Z level=info msg="Executing migration" id="teams permissions migration" 12:42:04 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"896f628b-2422-4a7a-9645-8164618b395e","timestampMs":1714048823411,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.keystore.password = null 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.95479831Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=858.142µs 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.427+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.keystore.type = JKS 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.961236024Z level=info msg="Executing migration" id="dashboard permissions" 12:42:04 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"12f5c5ff-7241-48ae-ba7d-84cdf580311e","timestampMs":1714048823410,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} 12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 
policy-pap | ssl.protocol = TLSv1.3 12:42:04 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.427+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 12:42:04 kafka | [2024-04-25 12:40:03,949] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.provider = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.961887123Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=652.039µs 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.427+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.secure.random.implementation = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.967859492Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 12:42:04 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 12:42:04 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"896f628b-2422-4a7a-9645-8164618b395e","timestampMs":1714048823411,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.trustmanager.algorithm = PKIX 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.968573491Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=714.189µs 12:42:04 policy-db-migrator | JOIN pdpstatistics b 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.427+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.truststore.certificates = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.972653764Z level=info msg="Executing migration" id="drop managed folder create actions" 12:42:04 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.640+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.truststore.location = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.972938978Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=284.644µs 12:42:04 policy-db-migrator | SET a.id = b.id 12:42:04 policy-apex-pdp | 
{"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9c94d082-5dc3-41dd-b822-97664ab4caac","timestampMs":1714048823350,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.truststore.password = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.98218395Z level=info msg="Executing migration" id="alerting notification permissions" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.643+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | ssl.truststore.type = JKS 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.983065052Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=881.672µs 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9c94d082-5dc3-41dd-b822-97664ab4caac","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"bc2bc4b3-e57a-4454-b7dc-ad8eea338f0c","timestampMs":1714048823643,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | transaction.timeout.ms = 60000 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.993629981Z level=info msg="Executing migration" id="create query_history_star table v1" 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.652+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | transactional.id = null 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:29.995068529Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.437768ms 12:42:04 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 12:42:04 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9c94d082-5dc3-41dd-b822-97664ab4caac","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"bc2bc4b3-e57a-4454-b7dc-ad8eea338f0c","timestampMs":1714048823643,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.002043012Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.652+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.004274121Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=2.233029ms 12:42:04 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.691+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.105+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
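Note: the policy-pap line just above ("Instantiated an idempotent producer"), together with the earlier producer config dump (value.serializer = StringSerializer, transactional.id = null), describes a plain idempotent, non-transactional producer. A minimal sketch of publishing a PDP_STATUS-style heartbeat with that configuration follows; the payload shape is copied from the apex-pdp heartbeats above, but the class name and the instance name "apex-example" are placeholders, not values from this build:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class PdpStatusPublisher {  // hypothetical class name
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            // Matches "Instantiated an idempotent producer" in the policy-pap log above.
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringSerializer");
            // value.serializer = StringSerializer, as in the config dump above.
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringSerializer");

            // Payload shape copied from the PDP_STATUS heartbeat messages in the apex-pdp log.
            String status = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                    + "\"description\":\"Pdp Heartbeat\",\"messageName\":\"PDP_STATUS\","
                    + "\"timestampMs\":" + System.currentTimeMillis() + ","
                    + "\"name\":\"apex-example\",\"pdpGroup\":\"defaultGroup\"}";

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", status),
                        (metadata, ex) -> {
                            if (ex == null) {
                                System.out.println("[OUT|KAFKA|policy-pdp-pap] offset " + metadata.offset());
                            }
                        });
                producer.flush();
            }
        }
    }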
12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.014608687Z level=info msg="Executing migration" id="add column org_id in query_history_star" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-apex-pdp | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","timestampMs":1714048823667,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.108+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.022776004Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.164217ms 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.693+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.108+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.033169771Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 12:42:04 policy-db-migrator | 12:42:04 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9d346855-4bc1-4b25-8a3b-1b11512efc29","timestampMs":1714048823692,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.108+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714048801108 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.033385284Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=215.363µs 12:42:04 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.708+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.108+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=14d92362-e0b3-4597-b9c4-41b06f6af1c6, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.044559751Z level=info msg="Executing migration" id="create correlation table v1" 12:42:04 policy-db-migrator | 
-------------- 12:42:04 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9d346855-4bc1-4b25-8a3b-1b11512efc29","timestampMs":1714048823692,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.108+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.047008994Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=2.446872ms 12:42:04 policy-apex-pdp | [2024-04-25T12:40:23.709+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.108+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.062864562Z level=info msg="Executing migration" id="add index correlations.uid" 12:42:04 policy-apex-pdp | [2024-04-25T12:40:56.182+00:00|INFO|RequestLog|qtp1863100050-33] 172.17.0.4 - policyadmin [25/Apr/2024:12:40:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.51.2" 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.109+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.066397329Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=3.531887ms 12:42:04 policy-apex-pdp | [2024-04-25T12:41:56.083+00:00|INFO|RequestLog|qtp1863100050-28] 172.17.0.4 - policyadmin [25/Apr/2024:12:41:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.51.2" 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.110+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.071249883Z level=info msg="Executing migration" id="add index correlations.source_uid" 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica 
(state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.112+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.072636411Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.386698ms 12:42:04 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.113+00:00|INFO|TimerManager|Thread-9] timer manager update started 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.109708828Z level=info msg="Executing migration" id="add correlation config column" 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.113+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.122800861Z level=info msg="Migration successfully executed" id="add correlation config column" duration=13.092923ms 12:42:04 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.113+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.131031099Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.113+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.132473548Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.442159ms 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.114+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.136691384Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.116+00:00|INFO|ServiceManager|main] Policy PAP started 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.137810708Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.118564ms 12:42:04 policy-db-migrator | > upgrade 0210-sequence.sql 12:42:04 kafka | 
[2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.117+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.661 seconds (process running for 10.255) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.149118607Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.520+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.175859579Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=26.743132ms 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.520+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 6HLElDkITkKpDhaqvETosg 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.181373512Z level=info msg="Executing migration" id="create correlation v2" 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.521+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Cluster ID: 6HLElDkITkKpDhaqvETosg 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.182384085Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.010133ms 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,950] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:01.523+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: 6HLElDkITkKpDhaqvETosg 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.185515046Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:01.622+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:03,950] INFO [Controller id=1 epoch=1] Sending 
UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.186470569Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=955.063µs 12:42:04 policy-db-migrator | > upgrade 0220-sequence.sql 12:42:04 policy-pap | [2024-04-25T12:40:01.725+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 5 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:03,953] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.19343247Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:01.828+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:03,954] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.195513928Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=2.079968ms 12:42:04 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 12:42:04 policy-pap | [2024-04-25T12:40:01.931+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 7 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:03,954] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.204132951Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:02.032+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:03,954] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.206660135Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=2.529463ms 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:02.052+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 12:42:04 kafka | [2024-04-25 12:40:03,954] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.232374703Z level=info msg="Executing migration" id="copy correlation v1 to v2" 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:02.056+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,955] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.233121372Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=750.329µs 12:42:04 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 12:42:04 policy-pap | [2024-04-25T12:40:02.056+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: 6HLElDkITkKpDhaqvETosg 12:42:04 kafka | [2024-04-25 12:40:03,955] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.240929315Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,955] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:02.068+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.243263986Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=2.334211ms 12:42:04 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN 
toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 12:42:04 kafka | [2024-04-25 12:40:03,955] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:02.168+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.252635259Z level=info msg="Executing migration" id="add provisioning column" 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,955] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:02.228+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.261813971Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.178862ms 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,955] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:02.664+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.265129564Z level=info msg="Executing migration" id="create entity_events table" 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,956] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:02.883+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.266244158Z level=info msg="Migration successfully 
executed" id="create entity_events table" duration=1.114294ms 12:42:04 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 12:42:04 kafka | [2024-04-25 12:40:03,956] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:02.956+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.294548481Z level=info msg="Executing migration" id="create dashboard public config v1" 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,956] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:03.325+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 13 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.296821821Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=2.273511ms 12:42:04 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 12:42:04 kafka | [2024-04-25 12:40:03,956] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:03.628+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.304111457Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,956] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:03.924+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 15 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.304646904Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,956] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:03.934+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.30817953Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,957] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:04.029+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 17 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.309059152Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 12:42:04 policy-db-migrator | > upgrade 0120-toscatrigger.sql 12:42:04 kafka | [2024-04-25 12:40:03,957] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:04.040+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.317351661Z level=info msg="Executing migration" id="Drop old dashboard public config table" 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,957] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | 
[2024-04-25T12:40:04.134+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 19 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.31882527Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.47299ms 12:42:04 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 12:42:04 kafka | [2024-04-25 12:40:03,957] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:04.145+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.32491926Z level=info msg="Executing migration" id="recreate dashboard public config v1" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:04.240+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 21 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,957] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.326184588Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.302958ms 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:04.249+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,958] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.331153092Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:04.345+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 23 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,958] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.332394559Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.241127ms 12:42:04 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 12:42:04 policy-pap | [2024-04-25T12:40:04.354+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,958] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.34083585Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:04.451+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 25 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,958] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.342630063Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.792063ms 12:42:04 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 12:42:04 policy-pap | [2024-04-25T12:40:04.459+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,958] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.347068912Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:04.557+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching 
metadata with correlation id 27 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,958] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.348845926Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.779904ms 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:04.564+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,958] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.353851491Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:04.661+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 29 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,959] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.355143568Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.292167ms 12:42:04 policy-db-migrator | > upgrade 0140-toscaparameter.sql 12:42:04 policy-pap | [2024-04-25T12:40:04.669+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 26 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,959] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:04.766+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 31 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | 
[2024-04-25 12:40:03,959] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.360201645Z level=info msg="Executing migration" id="Drop public config table" 12:42:04 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 12:42:04 policy-pap | [2024-04-25T12:40:04.774+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 28 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,959] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.361661144Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.458639ms 12:42:04 policy-pap | [2024-04-25T12:40:04.872+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 33 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,959] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.366585559Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 12:42:04 kafka | [2024-04-25 12:40:03,959] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:04.878+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 30 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.367821734Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.237265ms 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:04.977+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 35 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,960] TRACE [Broker 
id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.373656282Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 12:42:04 policy-pap | [2024-04-25T12:40:04.982+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 32 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,960] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-db-migrator | > upgrade 0150-toscaproperty.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.374770517Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.112595ms 12:42:04 kafka | [2024-04-25 12:40:03,960] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:05.082+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 37 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.379265346Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 12:42:04 kafka | [2024-04-25 12:40:03,960] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 12:42:04 policy-pap | [2024-04-25T12:40:05.090+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 34 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.380492412Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.226756ms 12:42:04 kafka | [2024-04-25 12:40:03,960] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:05.186+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 39 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.387180669Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 12:42:04 kafka | [2024-04-25 12:40:03,960] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:05.195+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 36 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.389173666Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.993007ms 12:42:04 kafka | [2024-04-25 12:40:03,960] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:05.291+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 41 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.395708792Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 12:42:04 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 12:42:04 kafka | [2024-04-25 12:40:03,961] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:05.298+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 38 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.419407144Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=23.699372ms 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,961] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:05.396+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 43 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.429487446Z level=info msg="Executing migration" id="add annotations_enabled column" 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,961] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:05.403+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 40 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.442507818Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=13.021182ms 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,961] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:05.500+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 45 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.451095221Z level=info msg="Executing migration" id="add time_selection_enabled column" 12:42:04 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 12:42:04 kafka | [2024-04-25 12:40:03,961] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:05.508+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 42 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.459596653Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.499952ms 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:05.606+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 47 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.466820638Z level=info msg="Executing migration" id="delete orphaned public dashboards" 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,961] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:05.614+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 44 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.4670026Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=182.142µs 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,961] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:05.710+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 49 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.471892045Z level=info msg="Executing migration" id="add share column" 12:42:04 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 12:42:04 kafka | [2024-04-25 12:40:03,962] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:05.717+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 46 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.481962247Z level=info msg="Migration successfully executed" id="add share column" duration=10.073482ms 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,962] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:05.814+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, 
groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 51 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.486958923Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 12:42:04 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 12:42:04 kafka | [2024-04-25 12:40:03,962] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:05.820+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 48 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.487181146Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=222.393µs 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:05.918+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 53 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.49812085Z level=info msg="Executing migration" id="create file table" 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:05.927+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 50 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.499789382Z level=info msg="Migration successfully executed" id="create file table" duration=1.668212ms 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:06.024+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 55 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.506136415Z level=info msg="Executing migration" id="file table idx: path natural pk" 12:42:04 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:06.030+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 52 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.507991189Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.851794ms 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:06.135+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 57 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.514912071Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:06.135+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 54 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.516170027Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.258356ms 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:06.238+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 59 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.521086632Z level=info msg="Executing migration" id="create file_meta table" 12:42:04 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:06.242+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 56 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.521929393Z level=info msg="Migration successfully executed" id="create file_meta table" duration=842.831µs 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | 
[2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:06.343+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 61 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.525682993Z level=info msg="Executing migration" id="file table idx: path key" 12:42:04 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:06.348+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 58 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.526875688Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.192455ms 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:06.447+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 63 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.580095168Z level=info msg="Executing migration" id="set path collation in file table" 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:06.453+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 60 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.580377642Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=285.784µs 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:06.551+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 65 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.588992765Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 
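The repeated WARN lines above all come from the Kafka NetworkClient inside the two policy-pap consumers (consumer-policy-pap-4 on the heartbeat source and the consumer-53d3b957-...-3 group member on the pdp-pap source): each metadata refresh for the policy-pdp-pap topic is answered with LEADER_NOT_AVAILABLE until the broker finishes creating the topic and electing a partition leader, so the correlation id simply keeps climbing while the client retries. A minimal sketch of a consumer that would log the same transient warnings; the bootstrap address and group id are assumptions for illustration, not values taken from this job's configuration:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // assumed address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Subscribing before the topic has an elected leader makes the client
            // log "Error while fetching metadata ... LEADER_NOT_AVAILABLE" on every
            // metadata retry; poll() starts returning records once a leader exists.
            consumer.subscribe(List.of("policy-pdp-pap"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```

The occasional UNKNOWN_TOPIC_OR_PARTITION answers in the same stretch are the same retry loop caught at a slightly earlier point, before the broker has registered the topic metadata at all.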
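Interleaved with those warnings, the kafka TRACE lines show the controller (broker 1, epoch 1) delivering LeaderAndIsr state for all 50 __consumer_offsets partitions and then starting the become-leader transition for each one, with broker 1 as sole replica, leader, and isr=[1]. One way to confirm the resulting assignment from a client is to describe the topic with AdminClient; this is a sketch assuming kafka-clients 3.1+ and the same assumed bootstrap address as above:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class LeaderCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // assumed address

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(List.of("__consumer_offsets"))
                    .allTopicNames().get()   // kafka-clients 3.1+; older clients use all()
                    .get("__consumer_offsets");
            // Once the become-leader transitions above finish, every partition
            // should report broker 1 as leader with isr=[1], matching the TRACE lines.
            desc.partitions().forEach(p ->
                    System.out.printf("partition=%d leader=%s isr=%s%n",
                            p.partition(), p.leader(), p.isr()));
        }
    }
}
```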
12:42:04 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:06.557+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 62 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.589257659Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=264.574µs 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.599370662Z level=info msg="Executing migration" id="managed permissions migration" 12:42:04 policy-pap | [2024-04-25T12:40:06.655+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 67 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.600563268Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.195216ms 12:42:04 policy-pap | [2024-04-25T12:40:06.660+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 64 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.605518573Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 12:42:04 policy-pap | [2024-04-25T12:40:06.761+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 69 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.605918228Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=402.125µs 12:42:04 policy-pap | [2024-04-25T12:40:06.766+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer 
clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 66 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.610691881Z level=info msg="Executing migration" id="RBAC action name migrator" 12:42:04 policy-pap | [2024-04-25T12:40:06.864+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 71 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 12:42:04 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.612271892Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.581491ms 12:42:04 policy-pap | [2024-04-25T12:40:06.870+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 68 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.619388735Z level=info msg="Executing migration" id="Add UID column to playlist" 12:42:04 policy-pap | [2024-04-25T12:40:06.968+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 73 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.62813033Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.737845ms 12:42:04 policy-pap | [2024-04-25T12:40:06.973+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 70 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.633741504Z level=info msg="Executing migration" id="Update uid column values in playlist" 12:42:04 policy-pap | [2024-04-25T12:40:07.071+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 75 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 
12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 12:42:04 policy-db-migrator | > upgrade 0100-upgrade.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.634079069Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=339.985µs 12:42:04 policy-pap | [2024-04-25T12:40:07.080+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 72 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.691110619Z level=info msg="Executing migration" id="Add index for uid in playlist" 12:42:04 policy-pap | [2024-04-25T12:40:07.177+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 77 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 12:42:04 policy-db-migrator | select 'upgrade to 1100 completed' as msg 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.693658683Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.550994ms 12:42:04 policy-pap | [2024-04-25T12:40:07.183+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 74 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.702334027Z level=info msg="Executing migration" id="update group index for alert rules" 12:42:04 policy-pap | [2024-04-25T12:40:07.281+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 79 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.702951465Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=621.788µs 12:42:04 policy-pap | [2024-04-25T12:40:07.286+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 76 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 12:42:04 policy-db-migrator | msg 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.70944039Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 12:42:04 policy-pap | [2024-04-25T12:40:07.385+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 81 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 12:42:04 policy-db-migrator | upgrade to 1100 completed 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.709833536Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=395.946µs 12:42:04 policy-pap | [2024-04-25T12:40:07.389+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 78 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.716882029Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 12:42:04 policy-pap | [2024-04-25T12:40:07.490+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 83 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 12:42:04 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.717567967Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=674.219µs 12:42:04 policy-pap | [2024-04-25T12:40:07.493+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 80 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.724361467Z level=info msg="Executing migration" id="add action column to seed_assignment" 12:42:04 policy-pap | [2024-04-25T12:40:07.593+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 85 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 12:42:04 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.732853839Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=8.488622ms 12:42:04 policy-pap | [2024-04-25T12:40:07.601+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 82 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.737316907Z level=info msg="Executing migration" id="add scope column to seed_assignment" 12:42:04 policy-pap | [2024-04-25T12:40:07.697+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 87 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.745788829Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.461851ms 12:42:04 policy-pap | [2024-04-25T12:40:07.704+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 84 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.751024987Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 12:42:04 policy-pap | [2024-04-25T12:40:07.800+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 89 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 12:42:04 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 12:42:04 policy-pap | [2024-04-25T12:40:07.810+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, 
groupId=policy-pap] Error while fetching metadata with correlation id 86 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.752635849Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.612792ms 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:07.904+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 91 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.784370907Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 12:42:04 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.862882389Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=78.509952ms 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:07.914+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 88 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.867880286Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:08.008+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 93 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.868913389Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.035303ms 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:08.016+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 90 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.873722232Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:08.113+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 95 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.874730945Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.006613ms 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:08.121+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 92 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.884731887Z level=info msg="Executing migration" id="add primary key to seed_assigment" 12:42:04 policy-pap | [2024-04-25T12:40:08.217+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 97 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,997] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.905711723Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=20.978316ms 12:42:04 policy-pap | [2024-04-25T12:40:08.228+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 94 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.909011517Z level=info msg="Executing migration" id="add origin column to seed_assignment" 12:42:04 policy-pap | [2024-04-25T12:40:08.321+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 99 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 12:42:04 policy-db-migrator | > upgrade 0120-audit_sequence.sql 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.915502682Z level=info msg="Migration successfully executed" id="add origin 
column to seed_assignment" duration=6.487495ms 12:42:04 policy-pap | [2024-04-25T12:40:08.333+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 96 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.920308406Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 12:42:04 policy-pap | [2024-04-25T12:40:08.425+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 101 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.92065881Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=352.964µs 12:42:04 policy-pap | [2024-04-25T12:40:08.436+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 98 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.926940413Z level=info msg="Executing migration" id="prevent seeding OnCall access" 12:42:04 policy-pap | [2024-04-25T12:40:08.536+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 103 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.927192336Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=254.333µs 12:42:04 policy-pap | [2024-04-25T12:40:08.538+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 100 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:03,998] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 12:42:04 policy-db-migrator | 
-------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.931674265Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 12:42:04 policy-pap | [2024-04-25T12:40:08.639+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 102 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 12:42:04 kafka | [2024-04-25 12:40:03,999] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.932177182Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=504.687µs 12:42:04 policy-pap | [2024-04-25T12:40:08.642+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 105 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:03,999] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.938233941Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 12:42:04 policy-pap | [2024-04-25T12:40:08.744+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 104 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:04,036] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.938577546Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=347.235µs 
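
The WARN lines repeating through this stretch come from PAP's two consumers (the policy-heartbeat and policy-pdp-pap sources) asking the broker for metadata on the policy-pdp-pap topic before the single broker has finished creating it: LEADER_NOT_AVAILABLE and UNKNOWN_TOPIC_OR_PARTITION are retriable metadata errors, the client simply retries each fetch under a fresh correlation id (which is why the ids climb steadily), and the broker-side counterpart is visible in the interleaved kafka lines (become-leader transitions, "Created log for partition ..."). A minimal consumer sketch that exhibits the same behaviour, assuming a reachable single broker on localhost:9092 with topic auto-creation enabled — the address, group id re-use, and plain String deserialization here are illustrative, not the job's actual PAP configuration:

```java
// Minimal sketch: subscribing to a topic that does not exist yet produces the
// retriable metadata errors logged above (LEADER_NOT_AVAILABLE, then
// UNKNOWN_TOPIC_OR_PARTITION) until the broker auto-creates the topic and
// elects a leader; poll() keeps retrying the metadata fetch internally.
// Assumptions: broker at localhost:9092, kafka-clients on the classpath.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");              // group id seen in the log
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            // Each poll retries the metadata request with a new correlation id,
            // matching the incrementing ids in the WARN lines above.
            consumer.poll(Duration.ofSeconds(5)).forEach(r ->
                    System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
        }
    }
}
```
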
12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:08.744+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 107 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:04,047] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.94571675Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:08.846+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 109 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:04,048] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.946107465Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=394.035µs 12:42:04 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 12:42:04 policy-pap | [2024-04-25T12:40:08.848+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 106 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:04,049] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.951745199Z level=info msg="Executing migration" id="create folder table" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:08.948+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 108 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:04,050] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.953132527Z level=info msg="Migration successfully executed" id="create folder table" duration=1.389418ms 12:42:04 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 12:42:04 policy-pap | [2024-04-25T12:40:08.949+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 111 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:04,395] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.95709201Z level=info msg="Executing migration" id="Add index for parent_uid" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:09.052+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 113 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:04,396] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.958481858Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.390478ms 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:09.053+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 110 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:04,396] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.963392522Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:09.155+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 112 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:04,396] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.96472451Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.333108ms 12:42:04 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 12:42:04 policy-pap | [2024-04-25T12:40:09.156+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 115 
: {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:04,396] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.969433712Z level=info msg="Executing migration" id="Update folder title length" 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:04,723] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.969482622Z level=info msg="Migration successfully executed" id="Update folder title length" duration=51.2µs 12:42:04 policy-pap | [2024-04-25T12:40:09.258+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 117 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:04,724] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.974997235Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 12:42:04 policy-pap | [2024-04-25T12:40:09.263+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 114 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:04,725] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.976355664Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.359909ms 12:42:04 policy-pap | [2024-04-25T12:40:09.362+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 119 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | TRUNCATE TABLE sequence 12:42:04 kafka | [2024-04-25 12:40:04,725] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.981215757Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 12:42:04 policy-pap | [2024-04-25T12:40:09.366+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 116 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:04,725] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id 
Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.983126252Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.912245ms 12:42:04 policy-pap | [2024-04-25T12:40:09.468+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 118 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:05,183] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.989307873Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 12:42:04 policy-pap | [2024-04-25T12:40:09.471+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 121 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12:42:04 kafka | [2024-04-25 12:40:05,183] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.991262419Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.955616ms 12:42:04 policy-pap | [2024-04-25T12:40:09.572+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 123 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 12:42:04 kafka | [2024-04-25 12:40:05,183] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.998322692Z level=info msg="Executing migration" id="Sync dashboard and folder table" 12:42:04 policy-pap | [2024-04-25T12:40:09.573+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 120 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | -------------- 12:42:04 kafka | [2024-04-25 12:40:05,184] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:30.999275095Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=954.184µs 12:42:04 policy-pap | [2024-04-25T12:40:09.674+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 125 : 
{policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 12:42:04 kafka | [2024-04-25 12:40:05,184] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.003392688Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:09.686+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 122 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:05,913] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.003981927Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=588.389µs 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:09.778+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 127 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:05,914] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.105012955Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:09.789+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 124 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:05,914] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.106044739Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.034794ms 12:42:04 policy-db-migrator | DROP TABLE pdpstatistics 12:42:04 policy-pap | [2024-04-25T12:40:09.881+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 129 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:05,914] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.110561378Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | 
[2024-04-25T12:40:09.892+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 126 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:05,914] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.11148378Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=920.402µs 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:09.984+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 131 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:06,577] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.115829947Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:09.994+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 128 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:06,578] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 12:42:04 policy-pap | [2024-04-25T12:40:10.088+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 133 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:06,578] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.117845873Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=2.015776ms 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:10.097+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 130 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:06,578] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.125686017Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 12:42:04 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 12:42:04 policy-pap | 
[2024-04-25T12:40:10.191+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 135 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:06,578] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.128297821Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.607844ms 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | [2024-04-25T12:40:10.199+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 132 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:07,129] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.132903082Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:10.294+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 137 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:07,130] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-db-migrator | 12:42:04 policy-pap | [2024-04-25T12:40:10.302+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 134 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.134925968Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=2.019726ms 12:42:04 kafka | [2024-04-25 12:40:07,130] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 12:42:04 policy-pap | [2024-04-25T12:40:10.398+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 139 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.140346549Z level=info msg="Executing migration" id="create anon_device table" 12:42:04 kafka | [2024-04-25 12:40:07,130] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | -------------- 12:42:04 policy-pap | 
[2024-04-25T12:40:10.405+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 136 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:07,131] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 policy-db-migrator | DROP TABLE statistics_sequence 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.141310833Z level=info msg="Migration successfully executed" id="create anon_device table" duration=964.134µs 12:42:04 policy-pap | [2024-04-25T12:40:10.501+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 141 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:07,840] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-db-migrator | -------------- 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.174996836Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 12:42:04 policy-pap | [2024-04-25T12:40:10.507+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 138 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:07,841] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-db-migrator | 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.177487248Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.486621ms 12:42:04 policy-pap | [2024-04-25T12:40:10.605+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 143 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:07,841] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | policyadmin: OK: upgrade (1300) 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.183167482Z level=info msg="Executing migration" id="add index anon_device.updated_at" 12:42:04 policy-pap | [2024-04-25T12:40:10.611+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 140 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:07,841] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | name version 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.184763933Z level=info msg="Migration 
successfully executed" id="add index anon_device.updated_at" duration=1.597071ms 12:42:04 policy-pap | [2024-04-25T12:40:10.708+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 145 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:07,841] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 policy-db-migrator | policyadmin 1300 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.189922572Z level=info msg="Executing migration" id="create signing_key table" 12:42:04 policy-pap | [2024-04-25T12:40:10.714+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 142 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:07,976] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-db-migrator | ID script operation from_version to_version tag success atTime 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.191208818Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.285766ms 12:42:04 policy-pap | [2024-04-25T12:40:10.811+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 147 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:07,977] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:22 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.198034098Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 12:42:04 policy-pap | [2024-04-25T12:40:10.817+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 144 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:07,977] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.199252674Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.218906ms 12:42:04 policy-pap | [2024-04-25T12:40:10.914+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 149 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | 
[2024-04-25 12:40:07,977] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.203212777Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 12:42:04 policy-pap | [2024-04-25T12:40:10.920+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 146 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:07,977] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.204543514Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.330898ms 12:42:04 policy-pap | [2024-04-25T12:40:11.018+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 151 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:08,566] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.209385407Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 12:42:04 policy-pap | [2024-04-25T12:40:11.023+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 148 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:08,566] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.209759312Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=375.105µs 12:42:04 policy-pap | [2024-04-25T12:40:11.121+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 153 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:08,566] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 
(kafka.cluster.Partition) 12:42:04 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.215253305Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 12:42:04 policy-pap | [2024-04-25T12:40:11.126+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 150 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:08,566] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.226010166Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=10.755771ms 12:42:04 policy-pap | [2024-04-25T12:40:11.229+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 155 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:08,566] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.231689881Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 12:42:04 kafka | [2024-04-25 12:40:08,691] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:11.230+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 152 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.233019178Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.334317ms 12:42:04 kafka | [2024-04-25 12:40:08,692] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:11.332+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 154 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.239217869Z level=info msg="Executing migration" id="Add 
unique index for dashboard_org_id_folder_uid_title" 12:42:04 kafka | [2024-04-25 12:40:08,693] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:11.333+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 157 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.241368087Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=2.150598ms 12:42:04 kafka | [2024-04-25 12:40:08,693] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:11.434+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 159 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.245683615Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 12:42:04 kafka | [2024-04-25 12:40:08,693] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:11.437+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 156 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.247703082Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=2.019696ms 12:42:04 kafka | [2024-04-25 12:40:09,232] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:11.537+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 158 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.251488681Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 12:42:04 kafka | [2024-04-25 12:40:09,233] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:11.539+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 161 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:23 12:42:04 kafka | [2024-04-25 12:40:09,233] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:11.640+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 163 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.252585925Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.097864ms 12:42:04 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24 12:42:04 policy-pap | [2024-04-25T12:40:11.640+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 160 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.26059158Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 12:42:04 kafka | [2024-04-25 12:40:09,233] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-db-migrator 
| 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.262413075Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.821145ms 12:42:04 kafka | [2024-04-25 12:40:09,234] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:11.644+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 12:42:04 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24 12:42:04 kafka | [2024-04-25 12:40:09,764] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:11.644+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.269573978Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 12:42:04 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24 12:42:04 kafka | [2024-04-25 12:40:09,765] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:11.646+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.270728134Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.154306ms 12:42:04 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24 12:42:04 policy-pap | [2024-04-25T12:40:11.741+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 162 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.276858494Z level=info msg="Executing migration" id="create sso_setting table" 12:42:04 kafka | [2024-04-25 12:40:09,765] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:11.742+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 165 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.278429664Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.57083ms 12:42:04 kafka | [2024-04-25 12:40:09,765] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 22 
0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24 12:42:04 policy-pap | [2024-04-25T12:40:11.844+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 167 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.284606497Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 12:42:04 kafka | [2024-04-25 12:40:09,765] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24 12:42:04 policy-pap | [2024-04-25T12:40:11.846+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 164 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.285718321Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.112844ms 12:42:04 kafka | [2024-04-25 12:40:10,493] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24 12:42:04 policy-pap | [2024-04-25T12:40:11.948+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 169 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.28947242Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 12:42:04 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24 12:42:04 kafka | [2024-04-25 12:40:10,494] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:11.949+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 166 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.289945487Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=474.797µs 12:42:04 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24 12:42:04 kafka | [2024-04-25 12:40:10,494] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:12.052+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer 
clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 168 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.340500102Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 12:42:04 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24 12:42:04 kafka | [2024-04-25 12:40:10,494] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:12.054+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 171 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.340621713Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=122.802µs 12:42:04 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24 12:42:04 kafka | [2024-04-25 12:40:10,494] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:12.156+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 173 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.348177902Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 12:42:04 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24 12:42:04 kafka | [2024-04-25 12:40:11,262] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:12.160+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 170 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.359019484Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=10.839512ms 12:42:04 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24 12:42:04 kafka | [2024-04-25 12:40:11,262] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:12.260+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 175 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana 
| logger=migrator t=2024-04-25T12:39:31.364308684Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 12:42:04 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:24 12:42:04 kafka | [2024-04-25 12:40:11,262] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:12.268+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 172 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.371156484Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.84764ms 12:42:04 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25 12:42:04 kafka | [2024-04-25 12:40:11,262] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:12.364+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 177 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.375135647Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 12:42:04 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25 12:42:04 kafka | [2024-04-25 12:40:11,262] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
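
The recurring NetworkClient warnings above are the two consumers polling metadata for policy-pdp-pap before that topic has a live leader: the first lookups return UNKNOWN_TOPIC_OR_PARTITION, then LEADER_NOT_AVAILABLE while (presumably automatic) topic creation and leader election complete. A minimal sketch of pre-creating the topic so the consumers never spin on this, assuming kafka-python is available and the broker is reachable at localhost:9092 (both assumptions; the CSIT compose network may expose a different address):

from kafka.admin import KafkaAdminClient, NewTopic
from kafka.errors import TopicAlreadyExistsError

# Assumed broker address; adjust to the suite's docker network if different.
admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
try:
    # One partition / one replica matches the single-broker setup in this log.
    admin.create_topics([NewTopic(name="policy-pdp-pap",
                                  num_partitions=1,
                                  replication_factor=1)])
except TopicAlreadyExistsError:
    pass  # auto-creation or another component got there first; that is fine
finally:
    admin.close()
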
(state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:12.371+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 174 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.375492051Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=354.914µs 12:42:04 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25 12:42:04 kafka | [2024-04-25 12:40:11,757] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:12.468+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 179 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=migrator t=2024-04-25T12:39:31.3860134Z level=info msg="migrations completed" performed=548 skipped=0 duration=8.394833696s 12:42:04 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25 12:42:04 kafka | [2024-04-25 12:40:11,758] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:12.473+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 176 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=sqlstore t=2024-04-25T12:39:31.398178949Z level=info msg="Created default admin" user=admin 12:42:04 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25 12:42:04 kafka | [2024-04-25 12:40:11,758] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:12.570+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 181 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=sqlstore t=2024-04-25T12:39:31.398456313Z level=info msg="Created default organization" 12:42:04 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25 12:42:04 kafka | [2024-04-25 12:40:11,758] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:12.577+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 178 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=secrets t=2024-04-25T12:39:31.40273188Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 12:42:04 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2504241239220800u 1 
2024-04-25 12:39:25 12:42:04 kafka | [2024-04-25 12:40:11,758] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 grafana | logger=plugin.store t=2024-04-25T12:39:31.4218028Z level=info msg="Loading plugins..." 12:42:04 policy-pap | [2024-04-25T12:40:12.674+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 183 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25 12:42:04 kafka | [2024-04-25 12:40:12,084] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 grafana | logger=local.finder t=2024-04-25T12:39:31.462594686Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 12:42:04 policy-pap | [2024-04-25T12:40:12.681+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 180 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25 12:42:04 kafka | [2024-04-25 12:40:12,085] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 grafana | logger=plugin.store t=2024-04-25T12:39:31.462620046Z level=info msg="Plugins loaded" count=55 duration=40.817816ms 12:42:04 policy-pap | [2024-04-25T12:40:12.776+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 185 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25 12:42:04 kafka | [2024-04-25 12:40:12,085] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 12:42:04 grafana | logger=query_data t=2024-04-25T12:39:31.465251441Z level=info msg="Query Service initialization" 12:42:04 policy-pap | [2024-04-25T12:40:12.783+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 182 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25 12:42:04 kafka | [2024-04-25 12:40:12,086] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 grafana | logger=live.push_http t=2024-04-25T12:39:31.472439496Z level=info msg="Live Push Gateway initialization" 12:42:04 policy-pap | 
[2024-04-25T12:40:12.879+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 187 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25 12:42:04 kafka | [2024-04-25 12:40:12,086] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 grafana | logger=ngalert.migration t=2024-04-25T12:39:31.536471908Z level=info msg=Starting 12:42:04 policy-pap | [2024-04-25T12:40:12.885+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 184 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25 12:42:04 kafka | [2024-04-25 12:40:12,191] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 grafana | logger=ngalert.migration t=2024-04-25T12:39:31.537286959Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 12:42:04 policy-pap | [2024-04-25T12:40:12.982+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 189 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25 12:42:04 kafka | [2024-04-25 12:40:12,192] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 grafana | logger=ngalert.migration orgID=1 t=2024-04-25T12:39:31.53816154Z level=info msg="Migrating alerts for organisation" 12:42:04 policy-pap | [2024-04-25T12:40:12.987+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 186 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:25 12:42:04 kafka | [2024-04-25 12:40:12,192] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 12:42:04 grafana | logger=ngalert.migration orgID=1 t=2024-04-25T12:39:31.539380346Z level=info msg="Alerts found to migrate" alerts=0 12:42:04 policy-pap | [2024-04-25T12:40:13.086+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 191 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 47 
0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 kafka | [2024-04-25 12:40:12,192] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 grafana | logger=ngalert.migration t=2024-04-25T12:39:31.542630099Z level=info msg="Completed alerting migration" 12:42:04 policy-pap | [2024-04-25T12:40:13.092+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 188 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 kafka | [2024-04-25 12:40:12,193] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 grafana | logger=ngalert.state.manager t=2024-04-25T12:39:31.581652832Z level=info msg="Running in alternative execution of Error/NoData mode" 12:42:04 policy-pap | [2024-04-25T12:40:13.189+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 193 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 kafka | [2024-04-25 12:40:12,227] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 grafana | logger=infra.usagestats.collector t=2024-04-25T12:39:31.583393844Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 12:42:04 policy-pap | [2024-04-25T12:40:13.195+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 190 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 kafka | [2024-04-25 12:40:12,228] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 grafana | logger=provisioning.datasources t=2024-04-25T12:39:31.585647374Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 12:42:04 policy-pap | [2024-04-25T12:40:13.291+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 195 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 kafka | [2024-04-25 12:40:12,228] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 12:42:04 grafana | 
logger=provisioning.alerting t=2024-04-25T12:39:31.598573374Z level=info msg="starting to provision alerting" 12:42:04 policy-pap | [2024-04-25T12:40:13.297+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 192 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 kafka | [2024-04-25 12:40:12,229] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 grafana | logger=provisioning.alerting t=2024-04-25T12:39:31.598588285Z level=info msg="finished to provision alerting" 12:42:04 policy-pap | [2024-04-25T12:40:13.394+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 197 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 kafka | [2024-04-25 12:40:12,229] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 grafana | logger=ngalert.state.manager t=2024-04-25T12:39:31.598689406Z level=info msg="Warming state cache for startup" 12:42:04 policy-pap | [2024-04-25T12:40:13.400+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 194 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 kafka | [2024-04-25 12:40:12,335] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 grafana | logger=ngalert.state.manager t=2024-04-25T12:39:31.598901619Z level=info msg="State cache has been initialized" states=0 duration=210.143µs 12:42:04 policy-pap | [2024-04-25T12:40:13.498+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 199 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 kafka | [2024-04-25 12:40:12,337] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 grafana | logger=grafanaStorageLogger t=2024-04-25T12:39:31.598964079Z level=info msg="Storage starting" 12:42:04 policy-pap | [2024-04-25T12:40:13.503+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 196 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 56 
0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 kafka | [2024-04-25 12:40:12,337] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 12:42:04 grafana | logger=ngalert.multiorg.alertmanager t=2024-04-25T12:39:31.600223315Z level=info msg="Starting MultiOrg Alertmanager" 12:42:04 policy-pap | [2024-04-25T12:40:13.602+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 201 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 kafka | [2024-04-25 12:40:12,337] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 grafana | logger=ngalert.scheduler t=2024-04-25T12:39:31.600273146Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 12:42:04 policy-pap | [2024-04-25T12:40:13.606+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 198 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 kafka | [2024-04-25 12:40:12,338] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
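
The long run of kafka lines here is the broker bootstrapping the internal __consumer_offsets topic, one partition at a time (by default there are 50): each partition's log is loaded, created on disk with compaction enabled, and made leader at epoch 0. Which of those partitions holds a given group's committed offsets is abs(hashCode(group.id)) mod 50. A small sketch reproducing that mapping, assuming the default offsets.topic.num.partitions=50 and ASCII group ids:

def java_string_hashcode(s: str) -> int:
    """Java's String.hashCode(), which Kafka applies to group.id."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF  # 32-bit overflow semantics
    return h

def offsets_partition(group_id: str, num_partitions: int = 50) -> int:
    # Kafka's Utils.abs() keeps the low 31 bits, so mask rather than abs().
    return (java_string_hashcode(group_id) & 0x7FFFFFFF) % num_partitions

# The two consumer groups seen in this log:
print(offsets_partition("policy-pap"))
print(offsets_partition("53d3b957-3026-4843-bc4f-55d426241089"))
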
(state.change.logger) 12:42:04 grafana | logger=ticker t=2024-04-25T12:39:31.600460858Z level=info msg=starting first_tick=2024-04-25T12:39:40Z 12:42:04 policy-pap | [2024-04-25T12:40:13.704+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 203 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 kafka | [2024-04-25 12:40:12,550] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 grafana | logger=http.server t=2024-04-25T12:39:31.603037203Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 12:42:04 policy-pap | [2024-04-25T12:40:13.709+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 200 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 grafana | logger=provisioning.dashboard t=2024-04-25T12:39:31.689025333Z level=info msg="starting to provision dashboards" 12:42:04 kafka | [2024-04-25 12:40:12,551] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:13.807+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 205 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 grafana | logger=plugins.update.checker t=2024-04-25T12:39:31.706375811Z level=info msg="Update check succeeded" duration=95.234932ms 12:42:04 kafka | [2024-04-25 12:40:12,551] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:13.813+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 202 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:26 12:42:04 grafana | logger=grafana.update.checker t=2024-04-25T12:39:31.718030675Z level=info msg="Update check succeeded" duration=105.919023ms 12:42:04 kafka | [2024-04-25 12:40:12,552] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:13.908+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 207 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=sqlstore.transactions 
t=2024-04-25T12:39:31.803072313Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 12:42:04 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:27 12:42:04 kafka | [2024-04-25 12:40:12,552] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:13.915+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 204 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=grafana-apiserver t=2024-04-25T12:39:32.17256562Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 12:42:04 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:27 12:42:04 kafka | [2024-04-25 12:40:12,652] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:14.012+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 209 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=grafana-apiserver t=2024-04-25T12:39:32.173112537Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 12:42:04 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 12:42:04 kafka | [2024-04-25 12:40:12,653] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:14.017+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 206 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=provisioning.dashboard t=2024-04-25T12:39:32.841126072Z level=info msg="finished to provision dashboards" 12:42:04 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 12:42:04 kafka | [2024-04-25 12:40:12,653] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:14.114+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 211 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 grafana | logger=infra.usagestats t=2024-04-25T12:40:01.607204604Z level=info msg="Usage stats are ready to report" 12:42:04 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 12:42:04 kafka | [2024-04-25 12:40:12,653] INFO 
[Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:14.120+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 208 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 12:42:04 kafka | [2024-04-25 12:40:12,654] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 12:42:04 policy-pap | [2024-04-25T12:40:14.217+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 213 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:12,802] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 12:42:04 policy-pap | [2024-04-25T12:40:14.223+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 210 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:12,803] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 12:42:04 policy-pap | [2024-04-25T12:40:14.320+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 215 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:12,803] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 12:42:04 policy-pap | [2024-04-25T12:40:14.326+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 212 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:12,803] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 12:42:04 policy-pap | [2024-04-25T12:40:14.423+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] 
[Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 217 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:12,804] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 12:42:04 policy-pap | [2024-04-25T12:40:14.429+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 214 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,065] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 12:42:04 policy-pap | [2024-04-25T12:40:14.526+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 219 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,065] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 12:42:04 policy-pap | [2024-04-25T12:40:14.531+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 216 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,065] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 12:42:04 policy-pap | [2024-04-25T12:40:14.629+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 221 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,066] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 12:42:04 policy-pap | [2024-04-25T12:40:14.634+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 218 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,066] INFO [Broker id=1] Leader 
__consumer_offsets-17 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28 12:42:04 kafka | [2024-04-25 12:40:13,300] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 12:42:04 policy-pap | [2024-04-25T12:40:14.733+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 223 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,301] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 12:42:04 policy-pap | [2024-04-25T12:40:14.737+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 220 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,301] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 12:42:04 policy-pap | [2024-04-25T12:40:14.835+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 225 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,301] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 12:42:04 policy-pap | [2024-04-25T12:40:14.841+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 222 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,301] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
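
Interleaved with the broker output, policy-db-migrator is printing its status table one row at a time: row id, script name, operation, from-version, to-version, release tag, a success flag, and the execution timestamp (rows 74 onward are the 0800-series foreign-key scripts). A sketch that pulls those rows back out of a saved copy of this console log; the column names are an inference from the rows themselves, not the migrator's documented schema:

import re

# Matches e.g. "policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
#               upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:28"
ROW = re.compile(
    r"policy-db-migrator \| (\d+) (\S+\.sql) (\w+) (\w+) (\w+) (\S+) ([01]) "
    r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
)

def migrator_rows(console_text):
    for m in ROW.finditer(console_text):
        rid, script, op, ver_from, ver_to, tag, ok, at = m.groups()
        yield {"id": int(rid), "script": script, "op": op,
               "from": ver_from, "to": ver_to, "tag": tag,
               "success": ok == "1", "at": at}
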
(state.change.logger) 12:42:04 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 12:42:04 policy-pap | [2024-04-25T12:40:14.939+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 227 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,775] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 12:42:04 policy-pap | [2024-04-25T12:40:14.944+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 224 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,776] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 12:42:04 policy-pap | [2024-04-25T12:40:15.043+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 229 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,776] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 12:42:04 policy-pap | [2024-04-25T12:40:15.047+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 226 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,776] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 12:42:04 policy-pap | [2024-04-25T12:40:15.145+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 231 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,776] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
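
A few lines back, Grafana logged "Database locked, sleeping then retrying" with retry=0: its embedded SQLite store serializes writers, so concurrent provisioning work simply backs off and retries instead of failing. The pattern in miniature, as an illustrative sketch rather than Grafana's actual code:

import random, time

def with_retry(op, attempts=5, base_delay=0.05):
    """Run op(); on a 'database is locked'-style error, sleep and retry."""
    for attempt in range(attempts):
        try:
            return op()
        except RuntimeError:  # stand-in for sqlite3.OperationalError here
            if attempt == attempts - 1:
                raise
            # Exponential backoff with jitter, mirroring "sleeping then retrying".
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
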
(state.change.logger) 12:42:04 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 12:42:04 policy-pap | [2024-04-25T12:40:15.149+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 228 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,867] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:29 12:42:04 policy-pap | [2024-04-25T12:40:15.248+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 233 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,868] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:30 12:42:04 policy-pap | [2024-04-25T12:40:15.252+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 230 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,868] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:30 12:42:04 policy-pap | [2024-04-25T12:40:15.351+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 235 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,868] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:30 12:42:04 policy-pap | [2024-04-25T12:40:15.355+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 232 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,868] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 12:42:04 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:30 12:42:04 policy-pap | [2024-04-25T12:40:15.455+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 237 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:30 12:42:04 policy-pap | [2024-04-25T12:40:15.459+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 234 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,882] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2504241239220800u 1 2024-04-25 12:39:30 12:42:04 policy-pap | [2024-04-25T12:40:15.558+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 239 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,883] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:30 12:42:04 policy-pap | [2024-04-25T12:40:15.561+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 236 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,883] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:30 12:42:04 policy-pap | [2024-04-25T12:40:15.660+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 241 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,883] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:30 12:42:04 policy-pap | [2024-04-25T12:40:15.664+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 238 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:13,884] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR 
[1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:30 12:42:04 kafka | [2024-04-25 12:40:13,955] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:15.763+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 243 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:30 12:42:04 kafka | [2024-04-25 12:40:13,956] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:15.767+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 240 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:30 12:42:04 kafka | [2024-04-25 12:40:13,956] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:15.867+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 245 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:31 12:42:04 kafka | [2024-04-25 12:40:13,956] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:15.869+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 242 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:31 12:42:04 kafka | [2024-04-25 12:40:13,957] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:15.969+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 247 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:31 12:42:04 kafka | [2024-04-25 12:40:14,283] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:15.973+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 244 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:31 12:42:04 kafka | [2024-04-25 12:40:14,284] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:16.071+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 250 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:31 12:42:04 kafka | [2024-04-25 12:40:14,285] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:16.074+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 247 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:31 12:42:04 kafka | [2024-04-25 12:40:14,285] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:16.173+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 252 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2504241239220900u 1 2024-04-25 12:39:31 12:42:04 kafka | [2024-04-25 12:40:14,285] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
12:42:04 policy-pap | [2024-04-25T12:40:16.176+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 249 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:31 12:42:04 kafka | [2024-04-25 12:40:14,315] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:16.276+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 254 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:31 12:42:04 kafka | [2024-04-25 12:40:14,316] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:16.279+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 251 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:31 12:42:04 kafka | [2024-04-25 12:40:14,316] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:16.380+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 256 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:31 12:42:04 kafka | [2024-04-25 12:40:14,317] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:16.385+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 253 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:31 12:42:04 kafka | [2024-04-25 12:40:14,317] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 12:42:04 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:31 12:42:04 kafka | [2024-04-25 12:40:14,335] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:16.483+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 258 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:31 12:42:04 kafka | [2024-04-25 12:40:14,336] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:16.488+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 255 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:32 12:42:04 kafka | [2024-04-25 12:40:14,336] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:16.585+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 260 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2504241239221000u 1 2024-04-25 12:39:32 12:42:04 kafka | [2024-04-25 12:40:14,337] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:16.590+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 257 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2504241239221100u 1 2024-04-25 12:39:32 12:42:04 kafka | [2024-04-25 12:40:14,337] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:16.688+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 262 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2504241239221200u 1 2024-04-25 12:39:33 12:42:04 kafka | [2024-04-25 12:40:14,632] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:16.693+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 259 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2504241239221200u 1 2024-04-25 12:39:33 12:42:04 kafka | [2024-04-25 12:40:14,633] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:16.790+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 264 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2504241239221200u 1 2024-04-25 12:39:33 12:42:04 policy-pap | [2024-04-25T12:40:16.795+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 261 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2504241239221200u 1 2024-04-25 12:39:33 12:42:04 kafka | [2024-04-25 12:40:14,634] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:16.894+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 266 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2504241239221300u 1 2024-04-25 12:39:33 12:42:04 kafka | [2024-04-25 12:40:14,634] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:16.898+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 263 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2504241239221300u 1 2024-04-25 12:39:33 12:42:04 kafka | [2024-04-25 12:40:14,634] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and 
removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:16.996+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 268 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2504241239221300u 1 2024-04-25 12:39:33 12:42:04 kafka | [2024-04-25 12:40:14,842] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:16.999+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 265 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-db-migrator | policyadmin: OK @ 1300 12:42:04 kafka | [2024-04-25 12:40:14,843] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:17.099+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 270 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:14,844] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:17.102+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 267 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:14,844] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:17.203+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 272 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:14,844] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
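The policy-db-migrator rows interleaved above form its audit trail: a row id, the migration script, the operation (upgrade), the from/to schema versions, a batch tag, a success flag (1 = OK), and a timestamp, ending in the summary "policyadmin: OK @ 1300". A hedged sketch of reading such an audit table over JDBC; the connection URL, credentials, and the schema_versions table and column names below are illustrative assumptions, not the migrator's actual schema:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MigrationAuditCheck {
    public static void main(String[] args) throws Exception {
        // All connection details and table/column names are illustrative only;
        // the real policy-db-migrator schema may differ. Assumes a MariaDB
        // JDBC driver on the classpath.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_user");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT script, fromVersion, toVersion, success, executedAt "
                 + "FROM schema_versions ORDER BY executedAt")) {
            while (rs.next()) {
                // Mirrors the columns visible in the log rows above:
                // script name, version range, success flag, timestamp.
                System.out.printf("%s %s->%s success=%d at %s%n",
                    rs.getString("script"), rs.getString("fromVersion"),
                    rs.getString("toVersion"), rs.getInt("success"),
                    rs.getTimestamp("executedAt"));
            }
        }
    }
}
```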
12:42:04 policy-pap | [2024-04-25T12:40:17.205+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 269 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | [2024-04-25T12:40:17.306+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 274 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,124] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:17.308+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 271 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,125] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:17.408+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 276 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:15,125] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:17.411+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 273 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,126] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:17.511+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 278 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,126] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(HOyl9LomSW2VRWzaH4p5QQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:17.515+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 275 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,219] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:17.612+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 280 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:15,220] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:17.617+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 277 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | [2024-04-25T12:40:17.715+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 282 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,220] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:17.720+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 279 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,220] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:17.815+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 284 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:15,220] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:17.824+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 281 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,702] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:17.918+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 286 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,702] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:17.926+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 283 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,703] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:18.020+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 288 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:15,703] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:18.027+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 285 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,703] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:18.124+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 290 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,853] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:18.130+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 287 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,853] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:18.227+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 292 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,854] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:18.233+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 289 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,854] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:18.331+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 294 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:15,854] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:18.335+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 291 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:16,004] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:18.433+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 296 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 kafka | [2024-04-25 12:40:16,005] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:18.445+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 293 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:16,005] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:18.535+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 298 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:16,006] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:18.547+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 295 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:16,006] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 12:42:04 policy-pap | [2024-04-25T12:40:18.637+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 300 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:16,239] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 policy-pap | [2024-04-25T12:40:18.649+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 297 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:16,239] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 policy-pap | [2024-04-25T12:40:18.741+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 302 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 kafka | [2024-04-25 12:40:16,239] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:16,239] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:16,240] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:16,708] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 kafka | [2024-04-25 12:40:16,709] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 kafka | [2024-04-25 12:40:16,709] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:16,710] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:16,710] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:16,787] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 kafka | [2024-04-25 12:40:16,788] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 kafka | [2024-04-25 12:40:16,788] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:16,788] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:16,788] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:17,214] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 kafka | [2024-04-25 12:40:17,215] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 kafka | [2024-04-25 12:40:17,215] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:17,216] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:17,216] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:17,506] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 kafka | [2024-04-25 12:40:17,508] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 kafka | [2024-04-25 12:40:17,508] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:17,508] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:17,509] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:17,691] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 kafka | [2024-04-25 12:40:17,691] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 kafka | [2024-04-25 12:40:17,691] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:17,691] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:17,692] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:17,819] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 kafka | [2024-04-25 12:40:17,820] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 kafka | [2024-04-25 12:40:17,821] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:17,821] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:17,821] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:17,991] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 kafka | [2024-04-25 12:40:17,992] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 kafka | [2024-04-25 12:40:17,992] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:17,992] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:17,992] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:18,294] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 kafka | [2024-04-25 12:40:18,295] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 kafka | [2024-04-25 12:40:18,295] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:18,295] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 policy-pap | [2024-04-25T12:40:18.751+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 299 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | [2024-04-25T12:40:18.844+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 304 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | [2024-04-25T12:40:18.853+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 301 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | [2024-04-25T12:40:18.946+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 306 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | [2024-04-25T12:40:18.955+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 303 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | 
[2024-04-25T12:40:19.050+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 308 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | [2024-04-25T12:40:19.055+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 305 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 12:42:04 policy-pap | [2024-04-25T12:40:19.154+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Error while fetching metadata with correlation id 310 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | [2024-04-25T12:40:19.159+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 307 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 12:42:04 policy-pap | [2024-04-25T12:40:19.267+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 12:42:04 policy-pap | [2024-04-25T12:40:19.268+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 12:42:04 policy-pap | [2024-04-25T12:40:19.276+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] (Re-)joining group 12:42:04 policy-pap | [2024-04-25T12:40:19.276+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 12:42:04 policy-pap | [2024-04-25T12:40:19.298+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-8984bd6d-ba2b-4123-8965-111129945dd5 12:42:04 policy-pap | [2024-04-25T12:40:19.299+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 12:42:04 policy-pap | [2024-04-25T12:40:19.299+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 12:42:04 policy-pap | [2024-04-25T12:40:19.299+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Request joining group due to: need to re-join with the given member-id: consumer-53d3b957-3026-4843-bc4f-55d426241089-3-ecd690b6-cba2-4ec7-bd80-418107943836 12:42:04 policy-pap | [2024-04-25T12:40:19.299+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 12:42:04 policy-pap | [2024-04-25T12:40:19.299+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] (Re-)joining group 12:42:04 policy-pap | [2024-04-25T12:40:22.323+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-8984bd6d-ba2b-4123-8965-111129945dd5', protocol='range'} 12:42:04 policy-pap | [2024-04-25T12:40:22.325+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Successfully joined group with generation Generation{generationId=1, memberId='consumer-53d3b957-3026-4843-bc4f-55d426241089-3-ecd690b6-cba2-4ec7-bd80-418107943836', protocol='range'} 12:42:04 policy-pap | [2024-04-25T12:40:22.334+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Finished assignment for group at generation 1: {consumer-53d3b957-3026-4843-bc4f-55d426241089-3-ecd690b6-cba2-4ec7-bd80-418107943836=Assignment(partitions=[policy-pdp-pap-0])} 12:42:04 policy-pap | [2024-04-25T12:40:22.334+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-8984bd6d-ba2b-4123-8965-111129945dd5=Assignment(partitions=[policy-pdp-pap-0])} 12:42:04 policy-pap | [2024-04-25T12:40:22.365+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Successfully synced group in generation Generation{generationId=1, memberId='consumer-53d3b957-3026-4843-bc4f-55d426241089-3-ecd690b6-cba2-4ec7-bd80-418107943836', protocol='range'} 12:42:04 policy-pap | [2024-04-25T12:40:22.365+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 12:42:04 policy-pap | [2024-04-25T12:40:22.366+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] 
Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-8984bd6d-ba2b-4123-8965-111129945dd5', protocol='range'} 12:42:04 policy-pap | [2024-04-25T12:40:22.366+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 12:42:04 policy-pap | [2024-04-25T12:40:22.369+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Adding newly assigned partitions: policy-pdp-pap-0 12:42:04 policy-pap | [2024-04-25T12:40:22.369+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 12:42:04 policy-pap | [2024-04-25T12:40:22.390+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Found no committed offset for partition policy-pdp-pap-0 12:42:04 policy-pap | [2024-04-25T12:40:22.390+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 12:42:04 policy-pap | [2024-04-25T12:40:22.408+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-53d3b957-3026-4843-bc4f-55d426241089-3, groupId=53d3b957-3026-4843-bc4f-55d426241089] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 12:42:04 policy-pap | [2024-04-25T12:40:22.408+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
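The entries above trace a complete consumer-group handshake for both PAP consumers: the group coordinator is discovered, the first JoinGroup is rejected with MemberIdRequiredException so each client rejoins with its server-assigned member id, the group syncs at generation 1, policy-pdp-pap-0 is assigned, and, with no committed offset found, each consumer resets its position. A minimal sketch that drives the same sequence, assuming kafka-clients and the broker/topic names from this log:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupJoinDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        // "latest" is the default reset policy and is consistent with the
        // "Resetting offset for partition policy-pdp-pap-0 to position
        // FetchPosition{offset=1, ...}" lines above: with no committed
        // offset, the consumer starts from the current end of the partition.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            // The first poll() drives the whole handshake seen above:
            // FindCoordinator -> JoinGroup (retried once after
            // MemberIdRequiredException) -> SyncGroup -> partition
            // assignment -> OffsetFetch -> offset reset if nothing committed.
            consumer.poll(Duration.ofSeconds(5));
            System.out.println("assigned: " + consumer.assignment());
        }
    }
}
```

Note that the heartbeat and pdp-pap sources use distinct group ids (policy-pap and the UUID-based group), which is why the log shows the handshake running twice, once per consumer.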
12:42:04 policy-pap | [2024-04-25T12:40:22.905+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 12:42:04 policy-pap | [] 12:42:04 policy-pap | [2024-04-25T12:40:22.906+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:42:04 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a408b809-2a21-46db-ba3c-dbdbae06aca1","timestampMs":1714048822843,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} 12:42:04 policy-pap | [2024-04-25T12:40:22.908+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 12:42:04 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a408b809-2a21-46db-ba3c-dbdbae06aca1","timestampMs":1714048822843,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} 12:42:04 policy-pap | [2024-04-25T12:40:22.915+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 12:42:04 policy-pap | [2024-04-25T12:40:23.366+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate starting 12:42:04 policy-pap | [2024-04-25T12:40:23.366+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate starting listener 12:42:04 policy-pap | [2024-04-25T12:40:23.366+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate starting timer 12:42:04 policy-pap | [2024-04-25T12:40:23.367+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30, expireMs=1714048853367] 12:42:04 policy-pap | [2024-04-25T12:40:23.368+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30, expireMs=1714048853367] 12:42:04 policy-pap | [2024-04-25T12:40:23.368+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate starting enqueue 12:42:04 policy-pap | [2024-04-25T12:40:23.369+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate started 12:42:04 policy-pap | [2024-04-25T12:40:23.371+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 12:42:04 policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","timestampMs":1714048823348,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:42:04 policy-pap | [2024-04-25T12:40:23.403+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 12:42:04 policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","timestampMs":1714048823348,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:42:04 policy-pap | [2024-04-25T12:40:23.403+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:42:04 policy-pap | 
{"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","timestampMs":1714048823348,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:42:04 policy-pap | [2024-04-25T12:40:23.403+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 12:42:04 policy-pap | [2024-04-25T12:40:23.404+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 12:42:04 policy-pap | [2024-04-25T12:40:23.427+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:42:04 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"12f5c5ff-7241-48ae-ba7d-84cdf580311e","timestampMs":1714048823410,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} 12:42:04 policy-pap | [2024-04-25T12:40:23.428+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 12:42:04 policy-pap | [2024-04-25T12:40:23.430+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 12:42:04 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"12f5c5ff-7241-48ae-ba7d-84cdf580311e","timestampMs":1714048823410,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup"} 12:42:04 policy-pap | [2024-04-25T12:40:23.435+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 12:42:04 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"896f628b-2422-4a7a-9645-8164618b395e","timestampMs":1714048823411,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 12:42:04 policy-pap | [2024-04-25T12:40:23.620+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping 12:42:04 policy-pap | [2024-04-25T12:40:23.620+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping enqueue 12:42:04 policy-pap | [2024-04-25T12:40:23.620+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping timer 12:42:04 policy-pap | [2024-04-25T12:40:23.621+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30, expireMs=1714048853367] 12:42:04 policy-pap | [2024-04-25T12:40:23.621+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping listener 12:42:04 policy-pap | [2024-04-25T12:40:23.621+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopped 12:42:04 policy-pap | [2024-04-25T12:40:23.624+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 12:42:04 kafka | [2024-04-25 12:40:18,295] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
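The JSON payloads above are the PDP's heartbeat (PDP_STATUS) and PAP's PDP_UPDATE request; the PDP acknowledges with a PDP_STATUS response whose responseTo echoes the update's requestId, which lets the TimerManager cancel the pending update timer. Dispatchers on each topic route or discard messages by messageName. A hedged Jackson sketch that parses one of these heartbeats into a local DTO; the record below is an illustration only, not ONAP's actual PdpStatus class from policy-models:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class PdpStatusParseDemo {
    // Local illustration only; field set taken from the log payloads above.
    // Record deserialization requires jackson-databind 2.12+.
    public record PdpStatus(String pdpType, String state, String healthy,
                            String messageName, String requestId,
                            long timestampMs, String name, String pdpGroup) {}

    public static void main(String[] args) throws Exception {
        String json = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\","
            + "\"healthy\":\"HEALTHY\",\"messageName\":\"PDP_STATUS\","
            + "\"requestId\":\"a408b809-2a21-46db-ba3c-dbdbae06aca1\","
            + "\"timestampMs\":1714048822843,"
            + "\"name\":\"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40\","
            + "\"pdpGroup\":\"defaultGroup\"}";
        ObjectMapper mapper = new ObjectMapper();
        PdpStatus status = mapper.readValue(json, PdpStatus.class);
        // PAP routes on messageName and correlates responses via
        // requestId/responseTo, as the dispatcher lines in the log show.
        System.out.println(status.name() + " is " + status.state());
    }
}
```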
12:42:04 kafka | [2024-04-25 12:40:18,409] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 kafka | [2024-04-25 12:40:18,410] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 kafka | [2024-04-25 12:40:18,410] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:18,410] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:18,410] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:18,585] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 kafka | [2024-04-25 12:40:18,586] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 kafka | [2024-04-25 12:40:18,586] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:18,586] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:18,586] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:18,826] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 12:42:04 kafka | [2024-04-25 12:40:18,826] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 12:42:04 kafka | [2024-04-25 12:40:18,827] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:18,827] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 12:42:04 kafka | [2024-04-25 12:40:18,827] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(hlyPC_3zQpGmePqsd4AOeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] 
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,169] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,170] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,177] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,179] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,184] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 4 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,184] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,184] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,184] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
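Aside: the Elected/Scheduling/Finished triplet that starts here repeats for all 50 __consumer_offsets partitions through the rest of this log. A small, hypothetical Python sketch that tallies the reported load times, assuming only the "Finished loading ..." entry format shown above; names and the sample string are illustrative.

# Hypothetical sketch: summarize GroupMetadataManager load times.
import re

FINISHED = re.compile(
    r"Finished loading offsets and group metadata from "
    r"__consumer_offsets-(\d+) in (\d+) milliseconds"
)

def load_times(console_text):
    """Map partition number -> reported load time in milliseconds."""
    return {int(p): int(ms) for p, ms in FINISHED.findall(console_text)}

# e.g. for the two entries directly above:
text = ("Finished loading offsets and group metadata from "
        "__consumer_offsets-3 in 4 milliseconds for epoch 0 ... "
        "Finished loading offsets and group metadata from "
        "__consumer_offsets-18 in 0 milliseconds for epoch 0 ...")
times = load_times(text)
print(max(times.values()), sum(times.values()))  # 4 4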
12:42:04 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"896f628b-2422-4a7a-9645-8164618b395e","timestampMs":1714048823411,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:42:04 policy-pap | [2024-04-25T12:40:23.626+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30
12:42:04 policy-pap | [2024-04-25T12:40:23.628+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate successful
12:42:04 policy-pap | [2024-04-25T12:40:23.628+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 start publishing next request
12:42:04 policy-pap | [2024-04-25T12:40:23.628+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange starting
12:42:04 policy-pap | [2024-04-25T12:40:23.628+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange starting listener
12:42:04 policy-pap | [2024-04-25T12:40:23.628+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange starting timer
12:42:04 policy-pap | [2024-04-25T12:40:23.629+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=9c94d082-5dc3-41dd-b822-97664ab4caac, expireMs=1714048853629]
12:42:04 policy-pap | [2024-04-25T12:40:23.629+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange starting enqueue
12:42:04 policy-pap | [2024-04-25T12:40:23.629+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange started
12:42:04 policy-pap | [2024-04-25T12:40:23.629+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=9c94d082-5dc3-41dd-b822-97664ab4caac, expireMs=1714048853629]
12:42:04 policy-pap | [2024-04-25T12:40:23.629+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
12:42:04 policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9c94d082-5dc3-41dd-b822-97664ab4caac","timestampMs":1714048823350,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:42:04 policy-pap | [2024-04-25T12:40:23.645+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
12:42:04 policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9c94d082-5dc3-41dd-b822-97664ab4caac","timestampMs":1714048823350,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:42:04 policy-pap | [2024-04-25T12:40:23.645+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
12:42:04 policy-pap | [2024-04-25T12:40:23.653+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
12:42:04 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9c94d082-5dc3-41dd-b822-97664ab4caac","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"bc2bc4b3-e57a-4454-b7dc-ad8eea338f0c","timestampMs":1714048823643,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:42:04 policy-pap | [2024-04-25T12:40:23.654+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 9c94d082-5dc3-41dd-b822-97664ab4caac
12:42:04 policy-pap | [2024-04-25T12:40:23.677+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
12:42:04 policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9c94d082-5dc3-41dd-b822-97664ab4caac","timestampMs":1714048823350,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:42:04 policy-pap | [2024-04-25T12:40:23.677+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
12:42:04 policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
12:42:04 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9c94d082-5dc3-41dd-b822-97664ab4caac","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"bc2bc4b3-e57a-4454-b7dc-ad8eea338f0c","timestampMs":1714048823643,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:42:04 policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange stopping
12:42:04 policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange stopping enqueue
12:42:04 policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange stopping timer
12:42:04 policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=9c94d082-5dc3-41dd-b822-97664ab4caac, expireMs=1714048853629]
12:42:04 policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange stopping listener
12:42:04 policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange stopped
12:42:04 policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpStateChange successful
12:42:04 policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 start publishing next request
12:42:04 policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate starting
12:42:04 policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate starting listener
12:42:04 policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate starting timer
12:42:04 policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=7c8ff35c-cd2f-465e-9c85-bcb76f083b98, expireMs=1714048853681]
12:42:04 policy-pap | [2024-04-25T12:40:23.681+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate starting enqueue
12:42:04 policy-pap | [2024-04-25T12:40:23.682+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
12:42:04 policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","timestampMs":1714048823667,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:42:04 policy-pap | [2024-04-25T12:40:23.683+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate started
12:42:04 policy-pap | [2024-04-25T12:40:23.696+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
12:42:04 policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","timestampMs":1714048823667,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:42:04 policy-pap | [2024-04-25T12:40:23.697+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
12:42:04 kafka | [2024-04-25 12:40:19,184] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,184] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,184] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,184] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,184] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
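Aside: in the PAP/PDP exchange above, each outgoing request (PDP_STATE_CHANGE, PDP_UPDATE) is answered by a PDP_STATUS whose response.responseTo echoes the request's requestId. A minimal, hypothetical Python sketch of that correlation check on payloads copied straight from the log; the function name is illustrative, not the project's API.

# Hypothetical sketch: pair a PDP request with its PDP_STATUS reply
# via requestId <-> response.responseTo, as seen in the payloads above.
import json

def reply_matches(request_json: str, status_json: str) -> bool:
    """True if the PDP_STATUS payload successfully answers the request."""
    request = json.loads(request_json)
    status = json.loads(status_json)
    response = status.get("response", {})
    return (status.get("messageName") == "PDP_STATUS"
            and response.get("responseTo") == request.get("requestId")
            and response.get("responseStatus") == "SUCCESS")

request = ('{"messageName":"PDP_STATE_CHANGE",'
           '"requestId":"9c94d082-5dc3-41dd-b822-97664ab4caac"}')
status = ('{"messageName":"PDP_STATUS","response":{"responseTo":'
          '"9c94d082-5dc3-41dd-b822-97664ab4caac","responseStatus":"SUCCESS"}}')
print(reply_matches(request, status))  # True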
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,185] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,186] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 policy-pap | [2024-04-25T12:40:23.698+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
12:42:04 policy-pap | {"source":"pap-480dd379-a703-49b2-b4a9-c44e36969f38","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","timestampMs":1714048823667,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:42:04 policy-pap | [2024-04-25T12:40:23.698+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
12:42:04 policy-pap | [2024-04-25T12:40:23.707+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
12:42:04 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9d346855-4bc1-4b25-8a3b-1b11512efc29","timestampMs":1714048823692,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:42:04 policy-pap | [2024-04-25T12:40:23.707+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping
12:42:04 policy-pap | [2024-04-25T12:40:23.707+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping enqueue
12:42:04 policy-pap | [2024-04-25T12:40:23.707+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping timer
12:42:04 policy-pap | [2024-04-25T12:40:23.707+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=7c8ff35c-cd2f-465e-9c85-bcb76f083b98, expireMs=1714048853681]
12:42:04 policy-pap | [2024-04-25T12:40:23.707+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopping listener
12:42:04 policy-pap | [2024-04-25T12:40:23.707+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate stopped
12:42:04 policy-pap | [2024-04-25T12:40:23.709+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
12:42:04 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"7c8ff35c-cd2f-465e-9c85-bcb76f083b98","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"9d346855-4bc1-4b25-8a3b-1b11512efc29","timestampMs":1714048823692,"name":"apex-c1762bbf-462b-4754-b2e2-2796b5f05a40","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
12:42:04 policy-pap | [2024-04-25T12:40:23.710+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 7c8ff35c-cd2f-465e-9c85-bcb76f083b98
12:42:04 policy-pap | [2024-04-25T12:40:23.712+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 PdpUpdate successful
12:42:04 policy-pap | [2024-04-25T12:40:23.712+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-c1762bbf-462b-4754-b2e2-2796b5f05a40 has no more requests
12:42:04 policy-pap | [2024-04-25T12:40:32.103+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client.
12:42:04 policy-pap | [2024-04-25T12:40:32.150+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
12:42:04 policy-pap | [2024-04-25T12:40:32.160+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
12:42:04 policy-pap | [2024-04-25T12:40:32.161+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
12:42:04 policy-pap | [2024-04-25T12:40:32.594+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup
12:42:04 policy-pap | [2024-04-25T12:40:33.145+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup
12:42:04 policy-pap | [2024-04-25T12:40:33.147+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup
12:42:04 policy-pap | [2024-04-25T12:40:33.661+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group testGroup
12:42:04 policy-pap | [2024-04-25T12:40:33.910+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy onap.restart.tca 1.0.0
12:42:04 policy-pap | [2024-04-25T12:40:34.056+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
12:42:04 policy-pap | [2024-04-25T12:40:34.057+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group testGroup
12:42:04 policy-pap | [2024-04-25T12:40:34.057+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group testGroup
12:42:04 policy-pap | [2024-04-25T12:40:34.085+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-25T12:40:33Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-25T12:40:34Z, user=policyadmin)]
12:42:04 policy-pap | [2024-04-25T12:40:34.785+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup
12:42:04 policy-pap | [2024-04-25T12:40:34.786+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
12:42:04 policy-pap | [2024-04-25T12:40:34.786+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0
12:42:04 policy-pap | [2024-04-25T12:40:34.786+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup
12:42:04 policy-pap | [2024-04-25T12:40:34.787+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup
12:42:04 policy-pap | [2024-04-25T12:40:34.796+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-25T12:40:34Z, user=policyadmin)]
12:42:04 policy-pap | [2024-04-25T12:40:35.158+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup
12:42:04 policy-pap | [2024-04-25T12:40:35.158+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
12:42:04 policy-pap | [2024-04-25T12:40:35.158+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
12:42:04 policy-pap | [2024-04-25T12:40:35.158+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
12:42:04 policy-pap | [2024-04-25T12:40:35.158+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
12:42:04 policy-pap | [2024-04-25T12:40:35.158+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
12:42:04 policy-pap | [2024-04-25T12:40:35.194+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-25T12:40:35Z, user=policyadmin)]
12:42:04 policy-pap | [2024-04-25T12:40:53.368+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=c93c9c10-4bcb-4ba5-b3ea-1a9726df0e30, expireMs=1714048853367]
12:42:04 policy-pap | [2024-04-25T12:40:53.629+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=9c94d082-5dc3-41dd-b822-97664ab4caac, expireMs=1714048853629]
12:42:04 policy-pap | [2024-04-25T12:40:55.908+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
12:42:04 policy-pap | [2024-04-25T12:40:55.910+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
12:42:04 policy-pap | [2024-04-25T12:42:01.116+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
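Aside: the timer entries above are consistent with a 30-second request timeout: the state-change timer registered at 12:40:23.629 carries expireMs=1714048853629 and is discarded at exactly 12:40:53.629. A small Python sketch of the arithmetic, assuming the bracketed log timestamps are UTC wall-clock times on the epoch-millisecond scale.

# Sketch: verify the 30 s state-change timeout implied by the log entries.
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
registered = datetime(2024, 4, 25, 12, 40, 23, 629000, tzinfo=timezone.utc)
expire_ms = 1714048853629  # from "Timer [name=9c94d082-..., expireMs=...]"

delta = registered - epoch  # exact integer milliseconds, no float rounding
registered_ms = (delta.days * 86400 + delta.seconds) * 1000 + delta.microseconds // 1000
print(expire_ms - registered_ms)  # 30000 -> matches "waiting 30000ms"

expires = epoch + timedelta(milliseconds=expire_ms)
print(expires.isoformat())  # 2024-04-25T12:40:53.629000+00:00, the discard time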
(kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,187] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,188] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,189] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,190] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 12:42:04 kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
12:42:04 kafka | [2024-04-25 12:40:19,190] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,190] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,190] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
12:42:04 kafka | [2024-04-25 12:40:19,190] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) 12:42:04 kafka | [2024-04-25 12:40:19,192] INFO [Broker id=1] Finished LeaderAndIsr request in 15241ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,195] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=hlyPC_3zQpGmePqsd4AOeA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=HOyl9LomSW2VRWzaH4p5QQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,200] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,201] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,202] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 12:42:04 kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,203] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,204] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,205] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
12:42:04 kafka | [2024-04-25 12:40:19,292] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 53d3b957-3026-4843-bc4f-55d426241089 in Empty state. Created a new member id consumer-53d3b957-3026-4843-bc4f-55d426241089-3-ecd690b6-cba2-4ec7-bd80-418107943836 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,292] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 4b79aeb3-604a-4e33-80d9-cdeedf19ce63 in Empty state. Created a new member id consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2-52209b4a-6d81-4373-80ef-9ff30791323e and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,292] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-8984bd6d-ba2b-4123-8965-111129945dd5 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
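Side note on the block above: the 51 partitions acknowledged in the LeaderAndIsr response are the 50 default partitions of the internal __consumer_offsets topic plus the single policy-pdp-pap partition, and each joining group is hashed onto one __consumer_offsets partition (here -24, -30 and -1). A minimal sketch of how one could verify both from inside the kafka container, assuming the standard Kafka CLI tools are on the PATH (in some container images they are installed without the .sh suffix):

  # Describe the internal offsets topic (expect 50 partitions, all led by broker 1)
  kafka-topics.sh --bootstrap-server kafka:9092 --describe --topic __consumer_offsets
  # Show the state and generation of the PAP consumer group seen in the rebalance below
  kafka-consumer-groups.sh --bootstrap-server kafka:9092 --describe --group policy-pap --state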
12:42:04 kafka | [2024-04-25 12:40:19,308] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-8984bd6d-ba2b-4123-8965-111129945dd5 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,308] INFO [GroupCoordinator 1]: Preparing to rebalance group 4b79aeb3-604a-4e33-80d9-cdeedf19ce63 in state PreparingRebalance with old generation 0 (__consumer_offsets-30) (reason: Adding new member consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2-52209b4a-6d81-4373-80ef-9ff30791323e with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:19,308] INFO [GroupCoordinator 1]: Preparing to rebalance group 53d3b957-3026-4843-bc4f-55d426241089 in state PreparingRebalance with old generation 0 (__consumer_offsets-1) (reason: Adding new member consumer-53d3b957-3026-4843-bc4f-55d426241089-3-ecd690b6-cba2-4ec7-bd80-418107943836 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:22,321] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:22,324] INFO [GroupCoordinator 1]: Stabilized group 53d3b957-3026-4843-bc4f-55d426241089 generation 1 (__consumer_offsets-1) with 1 members (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:22,325] INFO [GroupCoordinator 1]: Stabilized group 4b79aeb3-604a-4e33-80d9-cdeedf19ce63 generation 1 (__consumer_offsets-30) with 1 members (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:22,347] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-8984bd6d-ba2b-4123-8965-111129945dd5 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:22,347] INFO [GroupCoordinator 1]: Assignment received from leader consumer-53d3b957-3026-4843-bc4f-55d426241089-3-ecd690b6-cba2-4ec7-bd80-418107943836 for group 53d3b957-3026-4843-bc4f-55d426241089 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
12:42:04 kafka | [2024-04-25 12:40:22,347] INFO [GroupCoordinator 1]: Assignment received from leader consumer-4b79aeb3-604a-4e33-80d9-cdeedf19ce63-2-52209b4a-6d81-4373-80ef-9ff30791323e for group 4b79aeb3-604a-4e33-80d9-cdeedf19ce63 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
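The join / MemberIdRequiredException / rejoin / Stabilized / Assignment sequence above is the normal first-contact handshake for any dynamic consumer, not an error. A hedged way to reproduce the same coordinator log lines against this broker with the stock console consumer (the group name demo-group is illustrative, not one of the job's groups):

  kafka-console-consumer.sh --bootstrap-server kafka:9092 \
    --topic policy-pdp-pap --group demo-group --from-beginning --timeout-ms 10000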
12:42:04 ++ echo 'Tearing down containers...'
12:42:04 Tearing down containers...
12:42:04 ++ docker-compose down -v --remove-orphans
12:42:04 Stopping policy-apex-pdp ...
12:42:04 Stopping policy-pap ...
12:42:04 Stopping grafana ...
12:42:04 Stopping kafka ...
12:42:04 Stopping policy-api ...
12:42:04 Stopping simulator ...
12:42:04 Stopping mariadb ...
12:42:04 Stopping prometheus ...
12:42:04 Stopping zookeeper ...
12:42:07 Stopping grafana ... done
12:42:09 Stopping prometheus ... done
12:42:15 Stopping policy-apex-pdp ... done
12:42:25 Stopping simulator ... done
12:42:25 Stopping policy-pap ... done
12:42:26 Stopping mariadb ... done
12:42:26 Stopping kafka ... done
12:42:27 Stopping zookeeper ... done
12:42:36 Stopping policy-api ... done
12:42:36 Removing policy-apex-pdp ...
12:42:36 Removing policy-pap ...
12:42:36 Removing grafana ...
12:42:36 Removing kafka ...
12:42:36 Removing policy-api ...
12:42:36 Removing policy-db-migrator ...
12:42:36 Removing simulator ...
12:42:36 Removing mariadb ...
12:42:36 Removing prometheus ...
12:42:36 Removing zookeeper ...
12:42:36 Removing policy-db-migrator ... done
12:42:36 Removing policy-apex-pdp ... done
12:42:36 Removing mariadb ... done
12:42:36 Removing policy-api ... done
12:42:36 Removing grafana ... done
12:42:36 Removing policy-pap ... done
12:42:36 Removing simulator ... done
12:42:36 Removing kafka ... done
12:42:36 Removing prometheus ... done
12:42:36 Removing zookeeper ... done
12:42:36 Removing network compose_default
12:42:36 ++ cd /w/workspace/policy-pap-master-project-csit-pap
12:42:36 + load_set
12:42:36 + _setopts=hxB
12:42:36 ++ echo braceexpand:hashall:interactive-comments:xtrace
12:42:36 ++ tr : ' '
12:42:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:42:36 + set +o braceexpand
12:42:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:42:36 + set +o hashall
12:42:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:42:36 + set +o interactive-comments
12:42:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
12:42:36 + set +o xtrace
12:42:36 ++ echo hxB
12:42:36 ++ sed 's/./& /g'
12:42:36 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:42:36 + set +h
12:42:36 + for i in $(echo "$_setopts" | sed 's/./& /g')
12:42:36 + set +x
12:42:36 + rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
12:42:36 + [[ -n /tmp/tmp.IfKGrR3aFZ ]]
12:42:36 + rsync -av /tmp/tmp.IfKGrR3aFZ/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
12:42:36 sending incremental file list
12:42:36 ./
12:42:36 log.html
12:42:36 output.xml
12:42:36 report.html
12:42:36 testplan.txt
12:42:36 
12:42:36 sent 918,707 bytes received 95 bytes 1,837,604.00 bytes/sec
12:42:36 total size is 918,161 speedup is 1.00
12:42:36 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
12:42:36 + exit 1
12:42:36 Build step 'Execute shell' marked build as failure
12:42:36 $ ssh-agent -k
12:42:36 unset SSH_AUTH_SOCK;
12:42:36 unset SSH_AGENT_PID;
12:42:36 echo Agent pid 2080 killed;
12:42:36 [ssh-agent] Stopped.
12:42:36 Robot results publisher started...
12:42:36 INFO: Checking test criticality is deprecated and will be dropped in a future release!
12:42:36 -Parsing output xml:
12:42:37 Done!
12:42:37 WARNING! Could not find file: **/log.html
12:42:37 WARNING! Could not find file: **/report.html
12:42:37 -Copying log files to build dir:
12:42:37 Done!
12:42:37 -Assigning results to build:
12:42:37 Done!
12:42:37 -Checking thresholds:
12:42:37 Done!
12:42:37 Done publishing Robot results.
12:42:37 [PostBuildScript] - [INFO] Executing post build scripts.
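The load_set trace above follows a common save/restore idiom for shell options. A minimal reconstruction of what the traced function appears to do, under the assumption that a matching save_set recorded the short options into _setopts earlier in the job; note that once 'set +x' executes, xtrace is off, which is why the final iteration (the trailing B in hxB) never shows up in the log:

  load_set() {
      _setopts=hxB                                    # short options recorded earlier (per the trace)
      for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
          set +o "$i"                                 # clear every long option currently active
      done
      for i in $(echo "$_setopts" | sed 's/./& /g'); do
          set "+$i"                                   # clear each recorded single-letter option
      done
  }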
12:42:37 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2438396946621969292.sh
12:42:37 ---> sysstat.sh
12:42:37 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins336890825422044275.sh
12:42:37 ---> package-listing.sh
12:42:37 ++ facter osfamily
12:42:37 ++ tr '[:upper:]' '[:lower:]'
12:42:37 + OS_FAMILY=debian
12:42:37 + workspace=/w/workspace/policy-pap-master-project-csit-pap
12:42:37 + START_PACKAGES=/tmp/packages_start.txt
12:42:37 + END_PACKAGES=/tmp/packages_end.txt
12:42:37 + DIFF_PACKAGES=/tmp/packages_diff.txt
12:42:37 + PACKAGES=/tmp/packages_start.txt
12:42:37 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
12:42:37 + PACKAGES=/tmp/packages_end.txt
12:42:37 + case "${OS_FAMILY}" in
12:42:37 + dpkg -l
12:42:37 + grep '^ii'
12:42:37 + '[' -f /tmp/packages_start.txt ']'
12:42:37 + '[' -f /tmp/packages_end.txt ']'
12:42:37 + diff /tmp/packages_start.txt /tmp/packages_end.txt
12:42:37 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
12:42:37 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
12:42:37 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
12:42:37 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13023940649407330641.sh
12:42:37 ---> capture-instance-metadata.sh
12:42:38 Setup pyenv:
12:42:38 system
12:42:38 3.8.13
12:42:38 3.9.13
12:42:38 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
12:42:38 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-saub from file:/tmp/.os_lf_venv
12:42:39 lf-activate-venv(): INFO: Installing: lftools
12:42:49 lf-activate-venv(): INFO: Adding /tmp/venv-saub/bin to PATH
12:42:49 INFO: Running in OpenStack, capturing instance metadata
12:42:50 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10993113376887587068.sh
12:42:50 provisioning config files...
12:42:50 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config16576222324122791650tmp
12:42:50 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
12:42:50 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
12:42:50 [EnvInject] - Injecting environment variables from a build step.
12:42:50 [EnvInject] - Injecting as environment variables the properties content 
12:42:50 SERVER_ID=logs
12:42:50 
12:42:50 [EnvInject] - Variables injected successfully.
12:42:50 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13937674948363116199.sh
12:42:50 ---> create-netrc.sh
12:42:50 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3155689500288050888.sh
12:42:50 ---> python-tools-install.sh
12:42:50 Setup pyenv:
12:42:50 system
12:42:50 3.8.13
12:42:50 3.9.13
12:42:50 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
12:42:50 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-saub from file:/tmp/.os_lf_venv
12:42:51 lf-activate-venv(): INFO: Installing: lftools
12:43:00 lf-activate-venv(): INFO: Adding /tmp/venv-saub/bin to PATH
12:43:00 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16385538406870033277.sh
12:43:00 ---> sudo-logs.sh
12:43:00 Archiving 'sudo' log..
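The package-listing.sh trace above reduces to a snapshot-and-diff of installed packages. A hedged sketch of its Debian branch, with the file names taken from the trace and $WORKSPACE standing in for the job's workspace variable:

  START_PACKAGES=/tmp/packages_start.txt
  END_PACKAGES=/tmp/packages_end.txt
  DIFF_PACKAGES=/tmp/packages_diff.txt
  dpkg -l | grep '^ii' > "$END_PACKAGES"           # snapshot currently installed packages
  if [ -f "$START_PACKAGES" ] && [ -f "$END_PACKAGES" ]; then
      # diff exits non-zero when the lists differ, so don't let it abort the job
      diff "$START_PACKAGES" "$END_PACKAGES" > "$DIFF_PACKAGES" || true
  fi
  mkdir -p "$WORKSPACE/archives/"
  cp -f "$DIFF_PACKAGES" "$END_PACKAGES" "$START_PACKAGES" "$WORKSPACE/archives/"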
12:43:00 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16264651673967317711.sh
12:43:00 ---> job-cost.sh
12:43:00 Setup pyenv:
12:43:00 system
12:43:00 3.8.13
12:43:00 3.9.13
12:43:00 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
12:43:00 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-saub from file:/tmp/.os_lf_venv
12:43:02 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
12:43:07 lf-activate-venv(): INFO: Adding /tmp/venv-saub/bin to PATH
12:43:07 INFO: No Stack...
12:43:07 INFO: Retrieving Pricing Info for: v3-standard-8
12:43:07 INFO: Archiving Costs
12:43:07 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins8743828596054953600.sh
12:43:07 ---> logs-deploy.sh
12:43:07 Setup pyenv:
12:43:07 system
12:43:07 3.8.13
12:43:07 3.9.13
12:43:07 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
12:43:08 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-saub from file:/tmp/.os_lf_venv
12:43:09 lf-activate-venv(): INFO: Installing: lftools
12:43:18 lf-activate-venv(): INFO: Adding /tmp/venv-saub/bin to PATH
12:43:18 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1662
12:43:18 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
12:43:19 Archives upload complete.
12:43:20 INFO: archiving logs to Nexus
12:43:20 ---> uname -a:
12:43:20 Linux prd-ubuntu1804-docker-8c-8g-26122 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
12:43:20 
12:43:20 
12:43:20 ---> lscpu:
12:43:20 Architecture:        x86_64
12:43:20 CPU op-mode(s):      32-bit, 64-bit
12:43:20 Byte Order:          Little Endian
12:43:20 CPU(s):              8
12:43:20 On-line CPU(s) list: 0-7
12:43:20 Thread(s) per core:  1
12:43:20 Core(s) per socket:  1
12:43:20 Socket(s):           8
12:43:20 NUMA node(s):        1
12:43:20 Vendor ID:           AuthenticAMD
12:43:20 CPU family:          23
12:43:20 Model:               49
12:43:20 Model name:          AMD EPYC-Rome Processor
12:43:20 Stepping:            0
12:43:20 CPU MHz:             2799.998
12:43:20 BogoMIPS:            5599.99
12:43:20 Virtualization:      AMD-V
12:43:20 Hypervisor vendor:   KVM
12:43:20 Virtualization type: full
12:43:20 L1d cache:           32K
12:43:20 L1i cache:           32K
12:43:20 L2 cache:            512K
12:43:20 L3 cache:            16384K
12:43:20 NUMA node0 CPU(s):   0-7
12:43:20 Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
12:43:20 
12:43:20 
12:43:20 ---> nproc:
12:43:20 8
12:43:20 
12:43:20 
12:43:20 ---> df -h:
12:43:20 Filesystem      Size  Used Avail Use% Mounted on
12:43:20 udev             16G     0   16G   0% /dev
12:43:20 tmpfs           3.2G  708K  3.2G   1% /run
12:43:20 /dev/vda1       155G   14G  142G   9% /
12:43:20 tmpfs            16G     0   16G   0% /dev/shm
12:43:20 tmpfs           5.0M     0  5.0M   0% /run/lock
12:43:20 tmpfs            16G     0   16G   0% /sys/fs/cgroup
12:43:20 /dev/vda15      105M  4.4M  100M   5% /boot/efi
12:43:20 tmpfs           3.2G     0  3.2G   0% /run/user/1001
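Each '---> command:' block in this tail of the log follows the same capture pattern. A minimal sketch that would produce this layout (the command list is illustrative, not the job's actual script):

  for cmd in 'uname -a' 'lscpu' 'nproc' 'df -h' 'free -m' 'ip addr'; do
      echo "---> ${cmd}:"
      ${cmd}          # left unquoted on purpose so 'df -h' splits into command and flag
      echo; echo
  done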
12:43:20 
12:43:20 
12:43:20 ---> free -m:
12:43:20               total        used        free      shared  buff/cache   available
12:43:20 Mem:          32167         886       25335           0        5944       30824
12:43:20 Swap:          1023           0        1023
12:43:20 
12:43:20 
12:43:20 ---> ip addr:
12:43:20 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
12:43:20     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
12:43:20     inet 127.0.0.1/8 scope host lo
12:43:20        valid_lft forever preferred_lft forever
12:43:20     inet6 ::1/128 scope host 
12:43:20        valid_lft forever preferred_lft forever
12:43:20 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
12:43:20     link/ether fa:16:3e:5e:a0:f1 brd ff:ff:ff:ff:ff:ff
12:43:20     inet 10.30.107.33/23 brd 10.30.107.255 scope global dynamic ens3
12:43:20        valid_lft 85771sec preferred_lft 85771sec
12:43:20     inet6 fe80::f816:3eff:fe5e:a0f1/64 scope link 
12:43:20        valid_lft forever preferred_lft forever
12:43:20 3: docker0: mtu 1500 qdisc noqueue state DOWN group default 
12:43:20     link/ether 02:42:66:91:ec:39 brd ff:ff:ff:ff:ff:ff
12:43:20     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
12:43:20        valid_lft forever preferred_lft forever
12:43:20 
12:43:20 
12:43:20 ---> sar -b -r -n DEV:
12:43:20 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-26122)  04/25/24  _x86_64_  (8 CPU)
12:43:20 
12:43:20 12:32:53     LINUX RESTART  (8 CPU)
12:43:20 
12:43:20 12:33:01        tps      rtps      wtps   bread/s   bwrtn/s
12:43:20 12:34:03     116.20     70.05     46.14   5272.19  48403.13
12:43:20 12:35:01      84.19     18.17     66.02   1055.13  20018.20
12:43:20 12:36:01      86.84     14.08     72.76   1127.97  21832.42
12:43:20 12:37:01      76.68     10.00     66.68   1720.75  19406.09
12:43:20 12:38:01      82.21      0.05     82.16      5.60  45877.19
12:43:20 12:39:01     121.85      0.07    121.78      2.80  85055.82
12:43:20 12:40:01     319.78     11.56    308.22    761.04  33597.30
12:43:20 12:41:01      22.16      0.27     21.90     12.53  13334.96
12:43:20 12:42:01      11.26      0.02     11.25      2.93  13508.72
12:43:20 12:43:01      67.76      1.22     66.54    103.32  16506.55
12:43:20 Average:      98.94     12.53     86.41   1006.33  31791.76
12:43:20 
12:43:20 12:33:01  kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
12:43:20 12:34:03   30387324  31720188   2551896      7.75     45236   1611704   1444140      4.25    813924   1472748     47432
12:43:20 12:35:01   30136624  31710780   2802596      8.51     68908   1816184   1406212      4.14    858012   1652900    158200
12:43:20 12:36:01   29839608  31666088   3099612      9.41     82780   2042332   1494828      4.40    924852   1856908    140556
12:43:20 12:37:01   28938772  31646064   4000448     12.14     99356   2880888   1378328      4.06   1011048   2618020    769812
12:43:20 12:38:01   27212340  31638208   5726880     17.39    128524   4481992   1434916      4.22   1041392   4217196   1313300
12:43:20 12:39:01   26019896  31631780   6919324     21.01    139440   5608020   1504384      4.43   1058004   5342040    349144
12:43:20 12:40:01   23965816  29736608   8973404     27.24    154980   5733656   8498204     25.00   3115048   5264520       588
12:43:20 12:41:01   23803928  29580212   9135292     27.73    156164   5735880   8822544     25.96   3285380   5249112       308
12:43:20 12:42:01   23811812  29589616   9127408     27.71    156324   5737096   8852616     26.05   3276316   5249308       892
12:43:20 12:43:01   25973132  31592536   6966088     21.15    158000   5595660   1524084      4.48   1322696   5107484     29352
12:43:20 Average:   27008925  31051208   5930295     18.00    118971   4124341   3636026     10.70   1670667   3803024    280958
12:43:20 
12:43:20 12:33:01      IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
12:43:20 12:34:03       ens3    327.23    227.21    877.41     57.26      0.00      0.00      0.00      0.00
12:43:20 12:34:03         lo      1.07      1.07      0.10      0.10      0.00      0.00      0.00      0.00
12:43:20 12:34:03    docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:43:20 12:35:01       ens3     46.68     32.65    743.14      7.03      0.00      0.00      0.00      0.00
12:43:20 12:35:01         lo      1.65      1.65      0.18      0.18      0.00      0.00      0.00      0.00
12:43:20 12:35:01    docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:43:20 12:36:01       ens3     41.16     27.75    573.28      8.85      0.00      0.00      0.00      0.00
12:43:20 12:36:01         lo      0.53      0.53      0.06      0.06      0.00      0.00      0.00      0.00
12:43:20 12:36:01    docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:43:20 12:37:01 br-2592e41f6506      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:43:20 12:37:01       ens3    147.11     96.85   3907.67     14.62      0.00      0.00      0.00      0.00
12:43:20 12:37:01         lo      5.53      5.53      0.52      0.52      0.00      0.00      0.00      0.00
12:43:20 12:37:01    docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:43:20 12:38:01 br-2592e41f6506      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:43:20 12:38:01       ens3    641.18    269.47  13330.08     21.44      0.00      0.00      0.00      0.00
12:43:20 12:38:01         lo      3.33      3.33      0.35      0.35      0.00      0.00      0.00      0.00
12:43:20 12:38:01    docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:43:20 12:39:01 br-2592e41f6506      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:43:20 12:39:01       ens3    405.73    186.95  12383.97     13.93      0.00      0.00      0.00      0.00
12:43:20 12:39:01         lo      4.27      4.27      0.40      0.40      0.00      0.00      0.00      0.00
12:43:20 12:39:01    docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:43:20 12:40:01 vethd23a50d      1.38      2.00      0.16      0.19      0.00      0.00      0.00      0.00
12:43:20 12:40:01 br-2592e41f6506      0.82      0.65      0.06      0.30      0.00      0.00      0.00      0.00
12:43:20 12:40:01 veth74c81d5      0.92      1.13      0.06      0.06      0.00      0.00      0.00      0.00
12:43:20 12:40:01 veth2aed65e      0.00      0.35      0.00      0.02      0.00      0.00      0.00      0.00
12:43:20 12:41:01 vethd23a50d     36.64     39.96      4.47      4.62      0.00      0.00      0.00      0.00
12:43:20 12:41:01 br-2592e41f6506      2.05      2.43      1.82      1.74      0.00      0.00      0.00      0.00
12:43:20 12:41:01 veth74c81d5     18.80     11.90      2.34      1.60      0.00      0.00      0.00      0.00
12:43:20 12:41:01 veth2aed65e      0.00      0.03      0.00      0.00      0.00      0.00      0.00      0.00
12:43:20 12:42:01 vethd23a50d      0.15      0.33      0.01      0.02      0.00      0.00      0.00      0.00
12:43:20 12:42:01 br-2592e41f6506      1.38      1.60      0.11      0.15      0.00      0.00      0.00      0.00
12:43:20 12:42:01 veth74c81d5      3.18      4.67      0.66      0.36      0.00      0.00      0.00      0.00
12:43:20 12:42:01 veth2aed65e      0.00      0.03      0.00      0.00      0.00      0.00      0.00      0.00
12:43:20 12:43:01       ens3   1710.46    916.23  31885.58    163.82      0.00      0.00      0.00      0.00
12:43:20 12:43:01         lo     35.59     35.59      6.27      6.27      0.00      0.00      0.00      0.00
12:43:20 12:43:01    docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:43:20 Average:        ens3    171.02     91.41   3198.06     16.38      0.00      0.00      0.00      0.00
12:43:20 Average:          lo      3.27      3.27      0.60      0.60      0.00      0.00      0.00      0.00
12:43:20 Average:     docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
12:43:20 
12:43:20 
12:43:20 ---> sar -P ALL:
12:43:20 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-26122)  04/25/24  _x86_64_  (8 CPU)
12:43:20 
12:43:20 12:32:53     LINUX RESTART  (8 CPU)
12:43:20 
12:43:20 12:33:01     CPU     %user     %nice   %system   %iowait    %steal     %idle
12:43:20 12:34:03     all      5.31      0.00      0.98      9.90      0.04     83.77
12:43:20 12:34:03       0      6.37      0.00      0.62      2.87      0.03     90.11
12:43:20 12:34:03       1      2.74      0.00      0.53      0.22      0.02     96.50
12:43:20 12:34:03       2      4.11      0.00      0.87      0.68      0.02     94.32
12:43:20 12:34:03       3      7.28      0.00      1.22      1.30      0.03     90.16
12:43:20 12:34:03       4      3.77      0.00      2.04     29.93      0.05     64.21
12:43:20 12:34:03       5      5.44      0.00      0.57     42.29      0.05     51.65
12:43:20 12:34:03       6      8.81      0.00      1.05      1.10      0.03     89.00
12:43:20 12:34:03       7      3.98      0.00      0.94      0.85      0.03     94.20
12:43:20 12:35:01     all      8.26      0.00      0.64      4.33      0.03     86.75
12:43:20 12:35:01       0      6.06      0.00      0.60      0.02      0.00     93.32
12:43:20 12:35:01       1      3.00      0.00      0.29      0.02      0.02     96.67
12:43:20 12:35:01       2      0.59      0.00      0.22      0.41      0.00     98.78
12:43:20 12:35:01       3      0.93      0.00      0.38      0.55      0.02     98.12
12:43:20 12:35:01       4      2.26      0.00      0.26      0.12      0.05     97.30
12:43:20 12:35:01       5     10.51      0.00      0.48     28.53      0.03     60.45
12:43:20 12:35:01       6     13.06      0.00      0.64      2.90      0.03     83.37
12:43:20 12:35:01       7     29.64      0.00      2.26      2.12      0.05     65.92
12:43:20 12:36:01     all      6.62      0.00      0.41      6.75      0.03     86.19
12:43:20 12:36:01       0      0.05      0.00      0.03      0.02      0.00     99.90
12:43:20 12:36:01       1      5.96      0.00      0.37      0.05      0.02     93.60
12:43:20 12:36:01       2     22.63      0.00      0.68      4.96      0.03     71.69
12:43:20 12:36:01       3      6.27      0.00      0.47      3.35      0.02     89.90
12:43:20 12:36:01       4      3.74      0.00      0.32      8.06      0.08     87.80
12:43:20 12:36:01       5      3.62      0.00      0.40      7.02      0.02     88.94
12:43:20 12:36:01       6      9.73      0.00      0.85     30.15      0.02     59.25
12:43:20 12:36:01       7      0.97      0.00      0.13      0.47      0.00     98.43
12:43:20 12:37:01     all      5.60      0.00      1.56      8.37      0.03     84.43
12:43:20 12:37:01       0      2.91      0.00      1.20      0.02      0.02     95.86
12:43:20 12:37:01       1      6.28      0.00      1.93      0.07      0.03     91.69
12:43:20 12:37:01       2      5.48      0.00      1.42      1.72      0.03     91.35
12:43:20 12:37:01       3      7.85      0.00      1.36     18.09      0.03     72.68
12:43:20 12:37:01       4      7.52      0.00      1.96     39.26      0.05     51.21
12:43:20 12:37:01       5      5.33      0.00      1.37      0.08      0.03     93.18
12:43:20 12:37:01       6      5.48      0.00      1.21      7.45      0.03     85.83
12:43:20 12:37:01       7      3.94      0.00      2.04      0.40      0.03     93.58
12:43:20 12:38:01     all      5.85      0.00      2.50     12.41      0.04     79.21
12:43:20 12:38:01       0      4.33      0.00      3.37      0.00      0.03     92.27
12:43:20 12:38:01       1      5.65      0.00      1.84      0.02      0.03     92.45
12:43:20 12:38:01       2      5.54      0.00      2.53      2.01      0.03     89.88
12:43:20 12:38:01       3      5.91      0.00      2.49     22.55      0.03     69.02
12:43:20 12:38:01       4      5.56      0.00      2.14     45.47      0.05     46.78
12:43:20 12:38:01       5      6.38      0.00      1.89      2.39      0.02     89.32
12:43:20 12:38:01       6      6.78      0.00      3.49     24.79      0.05     64.88
12:43:20 12:38:01       7      6.67      0.00      2.21      2.25      0.03     88.83
12:43:20 12:39:01     all      4.99      0.00      2.16     10.39      0.04     82.42
12:43:20 12:39:01       0      3.79      0.00      1.96      0.34      0.02     93.90
12:43:20 12:39:01       1      4.85      0.00      1.59      0.07      0.05     93.44
12:43:20 12:39:01       2      6.66      0.00      1.96      0.97      0.03     90.37
12:43:20 12:39:01       3      5.31      0.00      2.00      0.55      0.05     92.09
12:43:20 12:39:01       4      5.32      0.00      2.04      6.81      0.03     85.80
12:43:20 12:39:01       5      6.00      0.00      2.07      4.91      0.05     86.97
12:43:20 12:39:01       6      2.98      0.00      2.68     47.40      0.05     46.89
12:43:20 12:39:01       7      5.04      0.00      2.96     22.21      0.03     69.75
12:43:20 12:40:01     all     23.55      0.00      3.12      8.69      0.08     64.55
12:43:20 12:40:01       0     15.52      0.00      2.75      8.10      0.08     73.55
12:43:20 12:40:01       1     23.64      0.00      2.78      1.49      0.07     72.02
12:43:20 12:40:01       2     22.77      0.00      3.59      3.43      0.08     70.13
12:43:20 12:40:01       3     22.88      0.00      2.80      3.08      0.08     71.16
12:43:20 12:40:01       4     26.97      0.00      3.33     33.17      0.10     36.43
12:43:20 12:40:01       5     23.88      0.00      3.01     14.49      0.07     58.56
12:43:20 12:40:01       6     25.09      0.00      3.32      3.63      0.08     67.88
12:43:20 12:40:01       7     27.66      0.00      3.40      2.23      0.08     66.63
12:43:20 12:41:01     all     10.01      0.00      1.02      6.35      0.06     82.57
12:43:20 12:41:01       0     12.80      0.00      1.45      2.79      0.07     82.89
12:43:20 12:41:01       1      7.94      0.00      0.89     21.27      0.07     69.83
12:43:20 12:41:01       2      8.84      0.00      0.99      1.92      0.07     88.19
12:43:20 12:41:01       3      9.91      0.00      0.97      3.24      0.05     85.83
12:43:20 12:41:01       4     10.39      0.00      0.89      3.39      0.08     85.25
12:43:20 12:41:01       5     10.42      0.00      1.02      1.79      0.05     86.72
12:43:20 12:41:01       6      9.89      0.00      1.07     15.04      0.05     73.96
12:43:20 12:41:01       7      9.85      0.00      0.89      1.47      0.05     87.74
12:43:20 12:42:01     all      0.77      0.00      0.19      2.12      0.03     96.88
12:43:20 12:42:01       0      0.68      0.00      0.25      0.32      0.05     98.70
12:43:20 12:42:01       1      1.17      0.00      0.20     16.19      0.02     82.42
12:43:20 12:42:01       2      0.70      0.00      0.15      0.13      0.02     99.00
12:43:20 12:42:01       3      0.55      0.00      0.18      0.00      0.03     99.23
12:43:20 12:42:01       4      1.07      0.00      0.17      0.20      0.05     98.51
12:43:20 12:42:01       5      0.85      0.00      0.20      0.13      0.03     98.78
12:43:20 12:42:01       6      0.32      0.00      0.13      0.00      0.02     99.53
12:43:20 12:42:01       7      0.82      0.00      0.25      0.00      0.03     98.90
12:43:20 12:43:01     all      5.62      0.00      0.69      2.72      0.04     90.93
12:43:20 12:43:01       0      1.22      0.00      0.65      1.82      0.02     96.29
12:43:20 12:43:01       1      2.82      0.00      0.62     14.02      0.03     82.50
12:43:20 12:43:01       2      2.60      0.00      0.72      0.18      0.03     96.46
12:43:20 12:43:01       3      1.31      0.00      0.52      2.01      0.03     96.12
12:43:20 12:43:01       4     15.27      0.00      0.80      1.44      0.03     82.46
12:43:20 12:43:01       5     16.21      0.00      0.95      1.00      0.07     81.77
12:43:20 12:43:01       6      1.45      0.00      0.57      0.18      0.03     97.76
12:43:20 12:43:01       7      4.03      0.00      0.67      1.10      0.03     94.16
12:43:20 Average:     all      7.65      0.00      1.33      7.21      0.04     83.77
12:43:20 Average:       0      5.37      0.00      1.29      1.63      0.03     91.68
12:43:20 Average:       1      6.41      0.00      1.11      5.35      0.04     87.09
12:43:20 Average:       2      8.01      0.00      1.31      1.65      0.04     88.99
12:43:20 Average:       3      6.84      0.00      1.24      5.48      0.04     86.40
12:43:20 Average:       4      8.20      0.00      1.40     16.79      0.06     73.55
12:43:20 Average:       5      8.85      0.00      1.20     10.20      0.04     79.70
12:43:20 Average:       6      8.34      0.00      1.50     13.27      0.04     76.85
12:43:20 Average:       7      9.19      0.00      1.57      3.30      0.04     85.90
12:43:20 
12:43:20 
12:43:20 
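The two sar reports above are read back from the day's sysstat sample history rather than measured live; on a host with sysstat collection enabled, the same tables could be regenerated with the flag sets shown in their headers:

  sar -b -r -n DEV    # I/O transfer rates, memory utilisation and per-interface network stats
  sar -P ALL          # per-CPU utilisation, one block per sampling interval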