23:11:00 Started by timer
23:11:00 Running as SYSTEM
23:11:00 [EnvInject] - Loading node environment variables.
23:11:00 Building remotely on prd-ubuntu1804-docker-8c-8g-9276 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
23:11:00 [ssh-agent] Looking for ssh-agent implementation...
23:11:00 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
23:11:00 $ ssh-agent
23:11:01 SSH_AUTH_SOCK=/tmp/ssh-bZlmftFGBnYv/agent.2154
23:11:01 SSH_AGENT_PID=2156
23:11:01 [ssh-agent] Started.
23:11:01 Running ssh-add (command line suppressed)
23:11:01 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_6955072512035191371.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_6955072512035191371.key)
23:11:01 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
23:11:01 The recommended git tool is: NONE
23:11:02 using credential onap-jenkins-ssh
23:11:02 Wiping out workspace first.
23:11:02 Cloning the remote Git repository
23:11:02 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
23:11:02 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
23:11:02 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
23:11:02 > git --version # timeout=10
23:11:02 > git --version # 'git version 2.17.1'
23:11:02 using GIT_SSH to set credentials Gerrit user
23:11:02 Verifying host key using manually-configured host key entries
23:11:02 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
23:11:03 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
23:11:03 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
23:11:03 Avoid second fetch
23:11:03 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
23:11:03 Checking out Revision 8361cb0e3663a610a46bc5ea8a0cc783ade26f89 (refs/remotes/origin/master)
23:11:03 > git config core.sparsecheckout # timeout=10
23:11:03 > git checkout -f 8361cb0e3663a610a46bc5ea8a0cc783ade26f89 # timeout=30
23:11:03 Commit message: "Fix config files removing hibernate deprecated properties and changing robot deprecated commands in test files"
23:11:03 > git rev-list --no-walk 8361cb0e3663a610a46bc5ea8a0cc783ade26f89 # timeout=10
23:11:03 provisioning config files...
23:11:03 copy managed file [npmrc] to file:/home/jenkins/.npmrc
23:11:03 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
23:11:03 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins460563352004639101.sh
23:11:04 ---> python-tools-install.sh
23:11:04 Setup pyenv:
23:11:04 * system (set by /opt/pyenv/version)
23:11:04 * 3.8.13 (set by /opt/pyenv/version)
23:11:04 * 3.9.13 (set by /opt/pyenv/version)
23:11:04 * 3.10.6 (set by /opt/pyenv/version)
23:11:08 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-biv1
23:11:08 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
23:11:11 lf-activate-venv(): INFO: Installing: lftools
23:11:47 lf-activate-venv(): INFO: Adding /tmp/venv-biv1/bin to PATH
23:11:47 Generating Requirements File
23:12:16 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
23:12:16 lftools 0.37.9 requires openstacksdk>=2.1.0, but you have openstacksdk 0.62.0 which is incompatible.
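The conflict above is reported by pip's resolver after lftools is installed into the freshly created venv: lftools 0.37.9 declares openstacksdk>=2.1.0, while the environment ends up with openstacksdk 0.62.0. A minimal sketch for reproducing the same check outside Jenkins follows; the venv path and versions are taken from this log, while "pip check" is a generic pip command and the pinned-upgrade line is an assumption, not something the LF tooling does here.

  python3 -m venv /tmp/venv-biv1           # same venv location the job used
  . /tmp/venv-biv1/bin/activate
  pip install lftools==0.37.9              # version reported in the log above
  pip check                                # surfaces broken requirements such as the openstacksdk mismatch
  # pip install 'openstacksdk>=2.1.0'      # one possible remediation; whether it is safe here is an assumption
  deactivate

The job continues past this message, as the generated requirements listing below shows.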
23:12:16 Python 3.10.6 23:12:17 pip 24.0 from /tmp/venv-biv1/lib/python3.10/site-packages/pip (python 3.10) 23:12:17 appdirs==1.4.4 23:12:17 argcomplete==3.2.2 23:12:17 aspy.yaml==1.3.0 23:12:17 attrs==23.2.0 23:12:17 autopage==0.5.2 23:12:17 beautifulsoup4==4.12.3 23:12:17 boto3==1.34.51 23:12:17 botocore==1.34.51 23:12:17 bs4==0.0.2 23:12:17 cachetools==5.3.3 23:12:17 certifi==2024.2.2 23:12:17 cffi==1.16.0 23:12:17 cfgv==3.4.0 23:12:17 chardet==5.2.0 23:12:17 charset-normalizer==3.3.2 23:12:17 click==8.1.7 23:12:17 cliff==4.6.0 23:12:17 cmd2==2.4.3 23:12:17 cryptography==3.3.2 23:12:17 debtcollector==3.0.0 23:12:17 decorator==5.1.1 23:12:17 defusedxml==0.7.1 23:12:17 Deprecated==1.2.14 23:12:17 distlib==0.3.8 23:12:17 dnspython==2.6.1 23:12:17 docker==4.2.2 23:12:17 dogpile.cache==1.3.2 23:12:17 email_validator==2.1.1 23:12:17 filelock==3.13.1 23:12:17 future==1.0.0 23:12:17 gitdb==4.0.11 23:12:17 GitPython==3.1.42 23:12:17 google-auth==2.28.1 23:12:17 httplib2==0.22.0 23:12:17 identify==2.5.35 23:12:17 idna==3.6 23:12:17 importlib-resources==1.5.0 23:12:17 iso8601==2.1.0 23:12:17 Jinja2==3.1.3 23:12:17 jmespath==1.0.1 23:12:17 jsonpatch==1.33 23:12:17 jsonpointer==2.4 23:12:17 jsonschema==4.21.1 23:12:17 jsonschema-specifications==2023.12.1 23:12:17 keystoneauth1==5.6.0 23:12:17 kubernetes==29.0.0 23:12:17 lftools==0.37.9 23:12:17 lxml==5.1.0 23:12:17 MarkupSafe==2.1.5 23:12:17 msgpack==1.0.7 23:12:17 multi_key_dict==2.0.3 23:12:17 munch==4.0.0 23:12:17 netaddr==1.2.1 23:12:17 netifaces==0.11.0 23:12:17 niet==1.4.2 23:12:17 nodeenv==1.8.0 23:12:17 oauth2client==4.1.3 23:12:17 oauthlib==3.2.2 23:12:17 openstacksdk==0.62.0 23:12:17 os-client-config==2.1.0 23:12:17 os-service-types==1.7.0 23:12:17 osc-lib==3.0.1 23:12:17 oslo.config==9.4.0 23:12:17 oslo.context==5.4.0 23:12:17 oslo.i18n==6.3.0 23:12:17 oslo.log==5.5.0 23:12:17 oslo.serialization==5.4.0 23:12:17 oslo.utils==7.1.0 23:12:17 packaging==23.2 23:12:17 pbr==6.0.0 23:12:17 platformdirs==4.2.0 23:12:17 prettytable==3.10.0 23:12:17 pyasn1==0.5.1 23:12:17 pyasn1-modules==0.3.0 23:12:17 pycparser==2.21 23:12:17 pygerrit2==2.0.15 23:12:17 PyGithub==2.2.0 23:12:17 pyinotify==0.9.6 23:12:17 PyJWT==2.8.0 23:12:17 PyNaCl==1.5.0 23:12:17 pyparsing==2.4.7 23:12:17 pyperclip==1.8.2 23:12:17 pyrsistent==0.20.0 23:12:17 python-cinderclient==9.4.0 23:12:17 python-dateutil==2.8.2 23:12:17 python-heatclient==3.4.0 23:12:17 python-jenkins==1.8.2 23:12:17 python-keystoneclient==5.3.0 23:12:17 python-magnumclient==4.3.0 23:12:17 python-novaclient==18.4.0 23:12:17 python-openstackclient==6.0.1 23:12:17 python-swiftclient==4.4.0 23:12:17 PyYAML==6.0.1 23:12:17 referencing==0.33.0 23:12:17 requests==2.31.0 23:12:17 requests-oauthlib==1.3.1 23:12:17 requestsexceptions==1.4.0 23:12:17 rfc3986==2.0.0 23:12:17 rpds-py==0.18.0 23:12:17 rsa==4.9 23:12:17 ruamel.yaml==0.18.6 23:12:17 ruamel.yaml.clib==0.2.8 23:12:17 s3transfer==0.10.0 23:12:17 simplejson==3.19.2 23:12:17 six==1.16.0 23:12:17 smmap==5.0.1 23:12:17 soupsieve==2.5 23:12:17 stevedore==5.2.0 23:12:17 tabulate==0.9.0 23:12:17 toml==0.10.2 23:12:17 tomlkit==0.12.4 23:12:17 tqdm==4.66.2 23:12:17 typing_extensions==4.10.0 23:12:17 tzdata==2024.1 23:12:17 urllib3==1.26.18 23:12:17 virtualenv==20.25.1 23:12:17 wcwidth==0.2.13 23:12:17 websocket-client==1.7.0 23:12:17 wrapt==1.16.0 23:12:17 xdg==6.0.0 23:12:17 xmltodict==0.13.0 23:12:17 yq==3.2.3 23:12:17 [EnvInject] - Injecting environment variables from a build step. 
23:12:17 [EnvInject] - Injecting as environment variables the properties content 23:12:17 SET_JDK_VERSION=openjdk17 23:12:17 GIT_URL="git://cloud.onap.org/mirror" 23:12:17 23:12:17 [EnvInject] - Variables injected successfully. 23:12:17 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins941769177188030158.sh 23:12:17 ---> update-java-alternatives.sh 23:12:17 ---> Updating Java version 23:12:17 ---> Ubuntu/Debian system detected 23:12:18 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode 23:12:18 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode 23:12:18 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode 23:12:18 openjdk version "17.0.4" 2022-07-19 23:12:18 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) 23:12:18 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) 23:12:18 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 23:12:18 [EnvInject] - Injecting environment variables from a build step. 23:12:18 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' 23:12:18 [EnvInject] - Variables injected successfully. 23:12:18 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins7989161888070880849.sh 23:12:18 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap 23:12:18 + set +u 23:12:18 + save_set 23:12:18 + RUN_CSIT_SAVE_SET=ehxB 23:12:18 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace 23:12:18 + '[' 1 -eq 0 ']' 23:12:18 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:18 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:18 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:18 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:18 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:18 + export ROBOT_VARIABLES= 23:12:18 + ROBOT_VARIABLES= 23:12:18 + export PROJECT=pap 23:12:18 + PROJECT=pap 23:12:18 + cd /w/workspace/policy-pap-master-project-csit-pap 23:12:18 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:18 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:18 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:18 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' 23:12:18 + relax_set 23:12:18 + set +e 23:12:18 + set +o pipefail 23:12:18 + . 
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:18 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:18 +++ mktemp -d 23:12:18 ++ ROBOT_VENV=/tmp/tmp.okrUEtYTcG 23:12:18 ++ echo ROBOT_VENV=/tmp/tmp.okrUEtYTcG 23:12:18 +++ python3 --version 23:12:18 ++ echo 'Python version is: Python 3.6.9' 23:12:18 Python version is: Python 3.6.9 23:12:18 ++ python3 -m venv --clear /tmp/tmp.okrUEtYTcG 23:12:19 ++ source /tmp/tmp.okrUEtYTcG/bin/activate 23:12:19 +++ deactivate nondestructive 23:12:19 +++ '[' -n '' ']' 23:12:19 +++ '[' -n '' ']' 23:12:19 +++ '[' -n /bin/bash -o -n '' ']' 23:12:19 +++ hash -r 23:12:19 +++ '[' -n '' ']' 23:12:19 +++ unset VIRTUAL_ENV 23:12:19 +++ '[' '!' nondestructive = nondestructive ']' 23:12:19 +++ VIRTUAL_ENV=/tmp/tmp.okrUEtYTcG 23:12:19 +++ export VIRTUAL_ENV 23:12:19 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:19 +++ PATH=/tmp/tmp.okrUEtYTcG/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:19 +++ export PATH 23:12:19 +++ '[' -n '' ']' 23:12:19 +++ '[' -z '' ']' 23:12:19 +++ _OLD_VIRTUAL_PS1= 23:12:19 +++ '[' 'x(tmp.okrUEtYTcG) ' '!=' x ']' 23:12:19 +++ PS1='(tmp.okrUEtYTcG) ' 23:12:19 +++ export PS1 23:12:19 +++ '[' -n /bin/bash -o -n '' ']' 23:12:19 +++ hash -r 23:12:19 ++ set -exu 23:12:19 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' 23:12:23 ++ echo 'Installing Python Requirements' 23:12:23 Installing Python Requirements 23:12:23 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt 23:12:41 ++ python3 -m pip -qq freeze 23:12:41 bcrypt==4.0.1 23:12:41 beautifulsoup4==4.12.3 23:12:41 bitarray==2.9.2 23:12:41 certifi==2024.2.2 23:12:41 cffi==1.15.1 23:12:41 charset-normalizer==2.0.12 23:12:41 cryptography==40.0.2 23:12:41 decorator==5.1.1 23:12:41 elasticsearch==7.17.9 23:12:41 elasticsearch-dsl==7.4.1 23:12:41 enum34==1.1.10 23:12:41 idna==3.6 23:12:41 importlib-resources==5.4.0 23:12:41 ipaddr==2.2.0 23:12:41 isodate==0.6.1 23:12:41 jmespath==0.10.0 23:12:41 jsonpatch==1.32 23:12:41 jsonpath-rw==1.4.0 23:12:41 jsonpointer==2.3 23:12:41 lxml==5.1.0 23:12:41 netaddr==0.8.0 23:12:41 netifaces==0.11.0 23:12:41 odltools==0.1.28 23:12:41 paramiko==3.4.0 23:12:41 pkg_resources==0.0.0 23:12:41 ply==3.11 23:12:41 pyang==2.6.0 23:12:41 pyangbind==0.8.1 23:12:41 pycparser==2.21 23:12:41 pyhocon==0.3.60 23:12:41 PyNaCl==1.5.0 23:12:41 pyparsing==3.1.1 23:12:41 python-dateutil==2.8.2 23:12:41 regex==2023.8.8 23:12:41 requests==2.27.1 23:12:41 robotframework==6.1.1 23:12:41 robotframework-httplibrary==0.4.2 23:12:41 robotframework-pythonlibcore==3.0.0 23:12:41 robotframework-requests==0.9.4 23:12:41 robotframework-selenium2library==3.0.0 23:12:41 robotframework-seleniumlibrary==5.1.3 23:12:41 robotframework-sshlibrary==3.8.0 23:12:41 scapy==2.5.0 23:12:41 scp==0.14.5 23:12:41 selenium==3.141.0 23:12:41 six==1.16.0 23:12:41 soupsieve==2.3.2.post1 23:12:41 urllib3==1.26.18 23:12:41 waitress==2.0.0 23:12:41 WebOb==1.8.7 23:12:41 WebTest==3.0.0 23:12:41 zipp==3.6.0 23:12:41 ++ mkdir -p /tmp/tmp.okrUEtYTcG/src/onap 23:12:41 ++ rm -rf 
/tmp/tmp.okrUEtYTcG/src/onap/testsuite 23:12:41 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre 23:12:47 ++ echo 'Installing python confluent-kafka library' 23:12:47 Installing python confluent-kafka library 23:12:47 ++ python3 -m pip install -qq confluent-kafka 23:12:48 ++ echo 'Uninstall docker-py and reinstall docker.' 23:12:48 Uninstall docker-py and reinstall docker. 23:12:48 ++ python3 -m pip uninstall -y -qq docker 23:12:49 ++ python3 -m pip install -U -qq docker 23:12:50 ++ python3 -m pip -qq freeze 23:12:50 bcrypt==4.0.1 23:12:50 beautifulsoup4==4.12.3 23:12:50 bitarray==2.9.2 23:12:50 certifi==2024.2.2 23:12:50 cffi==1.15.1 23:12:50 charset-normalizer==2.0.12 23:12:50 confluent-kafka==2.3.0 23:12:50 cryptography==40.0.2 23:12:50 decorator==5.1.1 23:12:50 deepdiff==5.7.0 23:12:50 dnspython==2.2.1 23:12:50 docker==5.0.3 23:12:50 elasticsearch==7.17.9 23:12:50 elasticsearch-dsl==7.4.1 23:12:50 enum34==1.1.10 23:12:50 future==1.0.0 23:12:50 idna==3.6 23:12:50 importlib-resources==5.4.0 23:12:50 ipaddr==2.2.0 23:12:50 isodate==0.6.1 23:12:50 Jinja2==3.0.3 23:12:50 jmespath==0.10.0 23:12:50 jsonpatch==1.32 23:12:50 jsonpath-rw==1.4.0 23:12:50 jsonpointer==2.3 23:12:50 kafka-python==2.0.2 23:12:50 lxml==5.1.0 23:12:50 MarkupSafe==2.0.1 23:12:50 more-itertools==5.0.0 23:12:50 netaddr==0.8.0 23:12:50 netifaces==0.11.0 23:12:50 odltools==0.1.28 23:12:50 ordered-set==4.0.2 23:12:50 paramiko==3.4.0 23:12:50 pbr==6.0.0 23:12:50 pkg_resources==0.0.0 23:12:50 ply==3.11 23:12:50 protobuf==3.19.6 23:12:50 pyang==2.6.0 23:12:50 pyangbind==0.8.1 23:12:50 pycparser==2.21 23:12:50 pyhocon==0.3.60 23:12:50 PyNaCl==1.5.0 23:12:50 pyparsing==3.1.1 23:12:50 python-dateutil==2.8.2 23:12:50 PyYAML==6.0.1 23:12:50 regex==2023.8.8 23:12:50 requests==2.27.1 23:12:50 robotframework==6.1.1 23:12:50 robotframework-httplibrary==0.4.2 23:12:50 robotframework-onap==0.6.0.dev105 23:12:50 robotframework-pythonlibcore==3.0.0 23:12:50 robotframework-requests==0.9.4 23:12:50 robotframework-selenium2library==3.0.0 23:12:50 robotframework-seleniumlibrary==5.1.3 23:12:50 robotframework-sshlibrary==3.8.0 23:12:50 robotlibcore-temp==1.0.2 23:12:50 scapy==2.5.0 23:12:50 scp==0.14.5 23:12:50 selenium==3.141.0 23:12:50 six==1.16.0 23:12:50 soupsieve==2.3.2.post1 23:12:50 urllib3==1.26.18 23:12:50 waitress==2.0.0 23:12:50 WebOb==1.8.7 23:12:50 websocket-client==1.3.1 23:12:50 WebTest==3.0.0 23:12:50 zipp==3.6.0 23:12:50 ++ uname 23:12:50 ++ grep -q Linux 23:12:50 ++ sudo apt-get -y -qq install libxml2-utils 23:12:50 + load_set 23:12:50 + _setopts=ehuxB 23:12:50 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace 23:12:50 ++ tr : ' ' 23:12:50 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:50 + set +o braceexpand 23:12:50 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:50 + set +o hashall 23:12:50 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:50 + set +o interactive-comments 23:12:50 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:50 + set +o nounset 23:12:50 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:50 + set +o xtrace 23:12:50 ++ echo ehuxB 23:12:50 ++ sed 's/./& /g' 23:12:50 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:50 + set +e 23:12:50 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:50 + set +h 23:12:50 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:50 + set +u 23:12:50 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:50 + set +x 23:12:50 + 
source_safely /tmp/tmp.okrUEtYTcG/bin/activate 23:12:50 + '[' -z /tmp/tmp.okrUEtYTcG/bin/activate ']' 23:12:50 + relax_set 23:12:50 + set +e 23:12:50 + set +o pipefail 23:12:50 + . /tmp/tmp.okrUEtYTcG/bin/activate 23:12:50 ++ deactivate nondestructive 23:12:50 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' 23:12:50 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:50 ++ export PATH 23:12:50 ++ unset _OLD_VIRTUAL_PATH 23:12:50 ++ '[' -n '' ']' 23:12:50 ++ '[' -n /bin/bash -o -n '' ']' 23:12:50 ++ hash -r 23:12:50 ++ '[' -n '' ']' 23:12:50 ++ unset VIRTUAL_ENV 23:12:50 ++ '[' '!' nondestructive = nondestructive ']' 23:12:50 ++ VIRTUAL_ENV=/tmp/tmp.okrUEtYTcG 23:12:50 ++ export VIRTUAL_ENV 23:12:50 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:50 ++ PATH=/tmp/tmp.okrUEtYTcG/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:50 ++ export PATH 23:12:50 ++ '[' -n '' ']' 23:12:50 ++ '[' -z '' ']' 23:12:50 ++ _OLD_VIRTUAL_PS1='(tmp.okrUEtYTcG) ' 23:12:50 ++ '[' 'x(tmp.okrUEtYTcG) ' '!=' x ']' 23:12:50 ++ PS1='(tmp.okrUEtYTcG) (tmp.okrUEtYTcG) ' 23:12:50 ++ export PS1 23:12:50 ++ '[' -n /bin/bash -o -n '' ']' 23:12:50 ++ hash -r 23:12:50 + load_set 23:12:50 + _setopts=hxB 23:12:50 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:12:50 ++ tr : ' ' 23:12:50 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:50 + set +o braceexpand 23:12:50 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:50 + set +o hashall 23:12:50 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:50 + set +o interactive-comments 23:12:50 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:50 + set +o xtrace 23:12:50 ++ echo hxB 23:12:50 ++ sed 's/./& /g' 23:12:50 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:50 + set +h 23:12:50 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:50 + set +x 23:12:50 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:50 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:50 + export TEST_OPTIONS= 23:12:50 + TEST_OPTIONS= 23:12:50 ++ mktemp -d 23:12:50 + WORKDIR=/tmp/tmp.KIs5KFRoI8 23:12:50 + cd /tmp/tmp.KIs5KFRoI8 23:12:50 + docker login -u docker -p docker nexus3.onap.org:10001 23:12:51 WARNING! Using --password via the CLI is insecure. Use --password-stdin. 23:12:51 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. 23:12:51 Configure a credential helper to remove this warning. 
See 23:12:51 https://docs.docker.com/engine/reference/commandline/login/#credentials-store 23:12:51 23:12:51 Login Succeeded 23:12:51 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:51 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:12:51 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' 23:12:51 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:51 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:51 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:12:51 + relax_set 23:12:51 + set +e 23:12:51 + set +o pipefail 23:12:51 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:51 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh 23:12:51 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:51 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview 23:12:51 +++ GERRIT_BRANCH=master 23:12:51 +++ echo GERRIT_BRANCH=master 23:12:51 GERRIT_BRANCH=master 23:12:51 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:12:51 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models 23:12:51 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models 23:12:51 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 23:12:52 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:12:52 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:12:52 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:12:52 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:12:52 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:12:52 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:12:52 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana 23:12:52 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:52 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:12:52 +++ grafana=false 23:12:52 +++ gui=false 23:12:52 +++ [[ 2 -gt 0 ]] 23:12:52 +++ key=apex-pdp 23:12:52 +++ case $key in 23:12:52 +++ echo apex-pdp 23:12:52 apex-pdp 23:12:52 +++ component=apex-pdp 23:12:52 +++ shift 23:12:52 +++ [[ 1 -gt 0 ]] 23:12:52 +++ key=--grafana 23:12:52 +++ case $key in 23:12:52 +++ grafana=true 23:12:52 +++ shift 23:12:52 +++ [[ 0 -gt 0 ]] 23:12:52 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:12:52 +++ echo 'Configuring docker compose...' 23:12:52 Configuring docker compose... 
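The xtrace lines above show start-compose.sh consuming its arguments: 'apex-pdp' selects the component to bring up and '--grafana' flips a flag. A rough bash sketch of a loop that would produce such a trace is shown here; it is reconstructed from the trace for illustration only, the real script may differ, and the --gui branch is an assumption based on the gui=false initialisation seen above.

  grafana=false
  gui=false
  while [[ $# -gt 0 ]]; do
    key="$1"
    case $key in
      --grafana) grafana=true; shift ;;        # matches the grafana=true step in the trace
      --gui)     gui=true; shift ;;            # assumed counterpart of the gui=false default
      *)         component="$key"; shift ;;    # matches component=apex-pdp in the trace
    esac
  done

With this shape, the invocation seen earlier, start-compose.sh apex-pdp --grafana, leaves component=apex-pdp and grafana=true, which is why the Grafana branch is taken below.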
23:12:52 +++ source export-ports.sh 23:12:52 +++ source get-versions.sh 23:12:54 +++ '[' -z pap ']' 23:12:54 +++ '[' -n apex-pdp ']' 23:12:54 +++ '[' apex-pdp == logs ']' 23:12:54 +++ '[' true = true ']' 23:12:54 +++ echo 'Starting apex-pdp application with Grafana' 23:12:54 Starting apex-pdp application with Grafana 23:12:54 +++ docker-compose up -d apex-pdp grafana 23:12:54 Creating network "compose_default" with the default driver 23:12:55 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 23:12:55 latest: Pulling from prom/prometheus 23:12:58 Digest: sha256:bc1794e85c9e00293351b967efa267ce6af1c824ac875a9d0c7ac84700a8b53e 23:12:58 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest 23:12:58 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... 23:12:58 latest: Pulling from grafana/grafana 23:13:02 Digest: sha256:8640e5038e83ca4554ed56b9d76375158bcd51580238c6f5d8adaf3f20dd5379 23:13:02 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest 23:13:02 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 23:13:03 10.10.2: Pulling from mariadb 23:13:07 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e 23:13:07 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 23:13:07 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 23:13:07 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator 23:13:11 Digest: sha256:5772a5c551b30d73f901debb8dc38f305559b920e248a9ccb1dba3b880278a13 23:13:11 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT 23:13:11 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... 23:13:11 latest: Pulling from confluentinc/cp-zookeeper 23:13:22 Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866 23:13:22 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest 23:13:22 Pulling kafka (confluentinc/cp-kafka:latest)... 23:13:23 latest: Pulling from confluentinc/cp-kafka 23:13:25 Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047 23:13:25 Status: Downloaded newer image for confluentinc/cp-kafka:latest 23:13:25 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 23:13:26 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator 23:13:33 Digest: sha256:59b5cc74cb5bbcb86c2e85d974415cfa4a6270c5728a7a489a5c6eece42f2b45 23:13:34 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT 23:13:35 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 23:13:39 3.1.2-SNAPSHOT: Pulling from onap/policy-api 23:13:45 Digest: sha256:71cc3c3555fddbd324c5ddec27e24db340b82732d2f6ce50eddcfdf6715a7ab2 23:13:45 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT 23:13:45 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 23:13:46 3.1.2-SNAPSHOT: Pulling from onap/policy-pap 23:13:48 Digest: sha256:448850bc9066413f6555e9c62d97da12eaa2c454a1304262987462aae46f4676 23:13:48 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT 23:13:48 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 
23:13:48 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp 23:13:59 Digest: sha256:8670bcaff746ebc196cef9125561eb167e1e65c7e2f8d374c0d8834d57564da4 23:13:59 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT 23:14:00 Creating mariadb ... 23:14:00 Creating prometheus ... 23:14:00 Creating compose_zookeeper_1 ... 23:14:00 Creating simulator ... 23:14:13 Creating prometheus ... done 23:14:13 Creating grafana ... 23:14:13 Creating grafana ... done 23:14:15 Creating compose_zookeeper_1 ... done 23:14:15 Creating kafka ... 23:14:16 Creating kafka ... done 23:14:17 Creating mariadb ... done 23:14:17 Creating policy-db-migrator ... 23:14:18 Creating policy-db-migrator ... done 23:14:18 Creating policy-api ... 23:14:19 Creating simulator ... done 23:14:20 Creating policy-api ... done 23:14:20 Creating policy-pap ... 23:14:21 Creating policy-pap ... done 23:14:21 Creating policy-apex-pdp ... 23:14:22 Creating policy-apex-pdp ... done 23:14:22 +++ echo 'Prometheus server: http://localhost:30259' 23:14:22 Prometheus server: http://localhost:30259 23:14:22 +++ echo 'Grafana server: http://localhost:30269' 23:14:22 Grafana server: http://localhost:30269 23:14:22 +++ cd /w/workspace/policy-pap-master-project-csit-pap 23:14:22 ++ sleep 10 23:14:32 ++ unset http_proxy https_proxy 23:14:32 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 23:14:32 Waiting for REST to come up on localhost port 30003... 23:14:32 NAMES STATUS 23:14:32 policy-apex-pdp Up 10 seconds 23:14:32 policy-pap Up 11 seconds 23:14:32 policy-api Up 12 seconds 23:14:32 kafka Up 16 seconds 23:14:32 grafana Up 18 seconds 23:14:32 compose_zookeeper_1 Up 17 seconds 23:14:32 simulator Up 13 seconds 23:14:32 prometheus Up 19 seconds 23:14:32 mariadb Up 15 seconds 23:14:37 NAMES STATUS 23:14:37 policy-apex-pdp Up 15 seconds 23:14:37 policy-pap Up 16 seconds 23:14:37 policy-api Up 17 seconds 23:14:37 kafka Up 21 seconds 23:14:37 grafana Up 23 seconds 23:14:37 compose_zookeeper_1 Up 22 seconds 23:14:37 simulator Up 18 seconds 23:14:37 prometheus Up 24 seconds 23:14:37 mariadb Up 20 seconds 23:14:42 NAMES STATUS 23:14:42 policy-apex-pdp Up 20 seconds 23:14:42 policy-pap Up 21 seconds 23:14:42 policy-api Up 22 seconds 23:14:42 kafka Up 26 seconds 23:14:42 grafana Up 28 seconds 23:14:42 compose_zookeeper_1 Up 27 seconds 23:14:42 simulator Up 23 seconds 23:14:42 prometheus Up 29 seconds 23:14:42 mariadb Up 25 seconds 23:14:47 NAMES STATUS 23:14:47 policy-apex-pdp Up 25 seconds 23:14:47 policy-pap Up 26 seconds 23:14:47 policy-api Up 27 seconds 23:14:47 kafka Up 31 seconds 23:14:47 grafana Up 33 seconds 23:14:47 compose_zookeeper_1 Up 32 seconds 23:14:47 simulator Up 28 seconds 23:14:47 prometheus Up 34 seconds 23:14:47 mariadb Up 30 seconds 23:14:52 NAMES STATUS 23:14:52 policy-apex-pdp Up 30 seconds 23:14:52 policy-pap Up 31 seconds 23:14:52 policy-api Up 32 seconds 23:14:52 kafka Up 36 seconds 23:14:52 grafana Up 39 seconds 23:14:52 compose_zookeeper_1 Up 37 seconds 23:14:52 simulator Up 33 seconds 23:14:52 prometheus Up 39 seconds 23:14:52 mariadb Up 35 seconds 23:14:57 NAMES STATUS 23:14:57 policy-apex-pdp Up 35 seconds 23:14:57 policy-pap Up 36 seconds 23:14:57 policy-api Up 37 seconds 23:14:57 kafka Up 41 seconds 23:14:57 grafana Up 44 seconds 23:14:57 compose_zookeeper_1 Up 42 seconds 23:14:57 simulator Up 38 seconds 23:14:57 prometheus Up 44 seconds 23:14:57 mariadb Up 40 seconds 23:14:58 ++ export 'SUITES=pap-test.robot 23:14:58 
pap-slas.robot' 23:14:58 ++ SUITES='pap-test.robot 23:14:58 pap-slas.robot' 23:14:58 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:14:58 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 23:14:58 + load_set 23:14:58 + _setopts=hxB 23:14:58 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:14:58 ++ tr : ' ' 23:14:58 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:58 + set +o braceexpand 23:14:58 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:58 + set +o hashall 23:14:58 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:58 + set +o interactive-comments 23:14:58 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:58 + set +o xtrace 23:14:58 ++ echo hxB 23:14:58 ++ sed 's/./& /g' 23:14:58 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:14:58 + set +h 23:14:58 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:14:58 + set +x 23:14:58 + docker_stats 23:14:58 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt 23:14:58 ++ uname -s 23:14:58 + '[' Linux == Darwin ']' 23:14:58 + sh -c 'top -bn1 | head -3' 23:14:58 top - 23:14:58 up 4 min, 0 users, load average: 2.87, 1.29, 0.51 23:14:58 Tasks: 210 total, 1 running, 131 sleeping, 0 stopped, 0 zombie 23:14:58 %Cpu(s): 12.8 us, 2.6 sy, 0.0 ni, 80.2 id, 4.3 wa, 0.0 hi, 0.0 si, 0.1 st 23:14:58 + echo 23:14:58 + sh -c 'free -h' 23:14:58 23:14:58 total used free shared buff/cache available 23:14:58 Mem: 31G 2.7G 22G 1.3M 6.2G 28G 23:14:58 Swap: 1.0G 0B 1.0G 23:14:58 + echo 23:14:58 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:14:58 23:14:58 NAMES STATUS 23:14:58 policy-apex-pdp Up 35 seconds 23:14:58 policy-pap Up 36 seconds 23:14:58 policy-api Up 38 seconds 23:14:58 kafka Up 42 seconds 23:14:58 grafana Up 44 seconds 23:14:58 compose_zookeeper_1 Up 43 seconds 23:14:58 simulator Up 39 seconds 23:14:58 prometheus Up 45 seconds 23:14:58 mariadb Up 41 seconds 23:14:58 + echo 23:14:58 + docker stats --no-stream 23:14:58 23:15:00 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 23:15:00 3e9ab3cda2b9 policy-apex-pdp 3.66% 180.7MiB / 31.41GiB 0.56% 10kB / 19.8kB 0B / 0B 49 23:15:00 bb1110758709 policy-pap 1.93% 604.9MiB / 31.41GiB 1.88% 31.2kB / 33kB 0B / 153MB 61 23:15:00 61e815c36e15 policy-api 0.11% 411.9MiB / 31.41GiB 1.28% 1MB / 711kB 0B / 0B 53 23:15:00 29a4774add15 kafka 3.72% 387MiB / 31.41GiB 1.20% 73.6kB / 77.3kB 0B / 475kB 85 23:15:00 b868b67ee7c3 grafana 0.15% 58.35MiB / 31.41GiB 0.18% 19.2kB / 3.38kB 0B / 24MB 17 23:15:00 d1a6c9d635c2 compose_zookeeper_1 0.08% 100.7MiB / 31.41GiB 0.31% 56.6kB / 49.9kB 0B / 401kB 60 23:15:00 7503b809b89d simulator 0.07% 120.7MiB / 31.41GiB 0.38% 1.27kB / 0B 0B / 0B 76 23:15:00 dc846f0f4702 prometheus 0.00% 18.89MiB / 31.41GiB 0.06% 13.1kB / 1.11kB 28.7kB / 0B 11 23:15:00 9a23d5aa5370 mariadb 0.02% 102MiB / 31.41GiB 0.32% 997kB / 1.19MB 11.1MB / 46.4MB 35 23:15:00 + echo 23:15:00 23:15:00 + cd /tmp/tmp.KIs5KFRoI8 23:15:00 + echo 'Reading the testplan:' 23:15:00 Reading the testplan: 23:15:00 + echo 'pap-test.robot 23:15:00 pap-slas.robot' 23:15:00 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' 23:15:00 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' 23:15:00 + cat testplan.txt 23:15:00 
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot 23:15:00 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 23:15:00 ++ xargs 23:15:00 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' 23:15:00 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:15:00 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 23:15:00 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:15:00 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:15:00 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' 23:15:00 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ... 23:15:00 + relax_set 23:15:00 + set +e 23:15:00 + set +o pipefail 23:15:00 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 23:15:01 ============================================================================== 23:15:01 pap 23:15:01 ============================================================================== 23:15:01 pap.Pap-Test 23:15:01 ============================================================================== 23:15:02 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | 23:15:02 ------------------------------------------------------------------------------ 23:15:02 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | 23:15:02 ------------------------------------------------------------------------------ 23:15:02 LoadNodeTemplates :: Create node templates in database using speci... 
| PASS |
23:15:02 ------------------------------------------------------------------------------
23:15:03 Healthcheck :: Verify policy pap health check | PASS |
23:15:03 ------------------------------------------------------------------------------
23:15:23 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
23:15:23 ------------------------------------------------------------------------------
23:15:23 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
23:15:23 ------------------------------------------------------------------------------
23:15:24 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
23:15:24 ------------------------------------------------------------------------------
23:15:24 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
23:15:24 ------------------------------------------------------------------------------
23:15:24 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
23:15:24 ------------------------------------------------------------------------------
23:15:24 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
23:15:24 ------------------------------------------------------------------------------
23:15:25 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
23:15:25 ------------------------------------------------------------------------------
23:15:25 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
23:15:25 ------------------------------------------------------------------------------
23:15:25 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
23:15:25 ------------------------------------------------------------------------------
23:15:25 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
23:15:25 ------------------------------------------------------------------------------
23:15:26 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
23:15:26 ------------------------------------------------------------------------------
23:15:26 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
23:15:26 ------------------------------------------------------------------------------
23:15:26 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
23:15:26 ------------------------------------------------------------------------------
23:15:46 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
23:15:46 ------------------------------------------------------------------------------
23:15:46 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
23:15:46 ------------------------------------------------------------------------------
23:15:46 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
23:15:46 ------------------------------------------------------------------------------
23:15:47 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
23:15:47 ------------------------------------------------------------------------------
23:15:47 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
23:15:47 ------------------------------------------------------------------------------
23:15:47 pap.Pap-Test | PASS |
23:15:47 22 tests, 22 passed, 0 failed
23:15:47 ==============================================================================
23:15:47 pap.Pap-Slas
23:15:47 ==============================================================================
23:16:47 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
23:16:47 ------------------------------------------------------------------------------
23:16:47 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
23:16:47 ------------------------------------------------------------------------------
23:16:47 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
23:16:47 ------------------------------------------------------------------------------
23:16:47 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
23:16:47 ------------------------------------------------------------------------------
23:16:47 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
23:16:47 ------------------------------------------------------------------------------
23:16:47 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
23:16:47 ------------------------------------------------------------------------------
23:16:47 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
23:16:47 ------------------------------------------------------------------------------
23:16:47 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
23:16:47 ------------------------------------------------------------------------------
23:16:47 pap.Pap-Slas | PASS |
23:16:47 8 tests, 8 passed, 0 failed
23:16:47 ==============================================================================
23:16:47 pap | PASS |
23:16:47 30 tests, 30 passed, 0 failed
23:16:47 ==============================================================================
23:16:47 Output: /tmp/tmp.KIs5KFRoI8/output.xml
23:16:47 Log: /tmp/tmp.KIs5KFRoI8/log.html
23:16:47 Report: /tmp/tmp.KIs5KFRoI8/report.html
23:16:47 + RESULT=0
23:16:47 + load_set
23:16:47 + _setopts=hxB
23:16:47 ++ tr : ' '
23:16:47 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:16:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:47 + set +o braceexpand
23:16:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:47 + set +o hashall
23:16:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:47 + set +o interactive-comments
23:16:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:47 + set +o xtrace
23:16:47 ++ echo hxB
23:16:47 ++ sed 's/./& /g'
23:16:47 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:47 + set +h
23:16:47 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:47 + set +x
23:16:47 + echo 'RESULT: 0'
23:16:47 RESULT: 0
23:16:47 + exit 0
23:16:47 + on_exit
23:16:47 + rc=0
23:16:47 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
23:16:47 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:47 NAMES STATUS
23:16:47 policy-apex-pdp Up 2 minutes
23:16:47 policy-pap Up 2 minutes
23:16:47 policy-api Up 2 minutes
23:16:47 kafka Up 2 minutes
23:16:47 grafana Up 2 minutes
23:16:47 compose_zookeeper_1 Up 2 minutes
23:16:47 simulator Up 2 minutes
23:16:47 prometheus Up 2 minutes
23:16:47 mariadb Up 2 minutes
23:16:47 + docker_stats
23:16:47 ++ uname -s
23:16:47 + '[' Linux == Darwin ']'
23:16:47 + sh -c 'top -bn1 | head -3'
23:16:47 top - 23:16:47 up 6 min, 0 users, load average: 0.72, 1.00, 0.49
23:16:47 Tasks: 196 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
23:16:47 %Cpu(s): 10.3 us, 1.9 sy, 0.0 ni, 84.4 id, 3.3 wa, 0.0 hi, 0.0 si, 0.1 st
23:16:47 + echo
23:16:47 
23:16:47 + sh -c 'free -h'
23:16:47 total used free shared buff/cache available
23:16:47 Mem: 31G 2.7G 22G 1.3M 6.2G 28G
23:16:47 Swap: 1.0G 0B 1.0G
23:16:47 + echo
23:16:47 
23:16:47 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:47 NAMES STATUS
23:16:47 policy-apex-pdp Up 2 minutes
23:16:47 policy-pap Up 2 minutes
23:16:47 policy-api Up 2 minutes
23:16:47 kafka Up 2 minutes
23:16:47 grafana Up 2 minutes
23:16:47 compose_zookeeper_1 Up 2 minutes
23:16:47 simulator Up 2 minutes
23:16:47 prometheus Up 2 minutes
23:16:47 mariadb Up 2 minutes
23:16:47 + echo
23:16:47 
23:16:47 + docker stats --no-stream
23:16:50 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
23:16:50 3e9ab3cda2b9 policy-apex-pdp 0.53% 193.3MiB / 31.41GiB 0.60% 57.7kB / 92.3kB 0B / 0B 52
23:16:50 bb1110758709 policy-pap 0.61% 541.2MiB / 31.41GiB 1.68% 2.33MB / 818kB 0B / 153MB 65
23:16:50 61e815c36e15 policy-api 0.09% 462.7MiB / 31.41GiB 1.44% 2.49MB / 1.27MB 0B / 0B 56
23:16:50 29a4774add15 kafka 9.33% 386.3MiB / 31.41GiB 1.20% 242kB / 217kB 0B / 573kB 85
23:16:50 b868b67ee7c3 grafana 0.03% 65.3MiB / 31.41GiB 0.20% 19.9kB / 4.33kB 0B / 24MB 17
23:16:50 d1a6c9d635c2 compose_zookeeper_1 0.06% 100.7MiB / 31.41GiB 0.31% 59.4kB / 51.4kB 0B / 401kB 60
23:16:50 7503b809b89d simulator 0.10% 120.9MiB / 31.41GiB 0.38% 1.5kB / 0B 0B / 0B 78
23:16:50 dc846f0f4702 prometheus 0.00%
25.43MiB / 31.41GiB 0.08% 192kB / 11.1kB 28.7kB / 0B 13 23:16:50 9a23d5aa5370 mariadb 0.01% 103.4MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11.1MB / 46.8MB 28 23:16:50 + echo 23:16:50 23:16:50 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:16:50 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' 23:16:50 + relax_set 23:16:50 + set +e 23:16:50 + set +o pipefail 23:16:50 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:16:50 ++ echo 'Shut down started!' 23:16:50 Shut down started! 23:16:50 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:16:50 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:16:50 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:16:50 ++ source export-ports.sh 23:16:50 ++ source get-versions.sh 23:16:52 ++ echo 'Collecting logs from docker compose containers...' 23:16:52 Collecting logs from docker compose containers... 23:16:52 ++ docker-compose logs 23:16:54 ++ cat docker_compose.log 23:16:54 Attaching to policy-apex-pdp, policy-pap, policy-api, policy-db-migrator, kafka, grafana, compose_zookeeper_1, simulator, prometheus, mariadb 23:16:54 zookeeper_1 | ===> User 23:16:54 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:54 zookeeper_1 | ===> Configuring ... 23:16:54 zookeeper_1 | ===> Running preflight checks ... 23:16:54 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... 23:16:54 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... 23:16:54 zookeeper_1 | ===> Launching ... 23:16:54 zookeeper_1 | ===> Launching zookeeper ... 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,786] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,792] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,792] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,792] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,792] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,794] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,794] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,794] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,794] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,795] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,795] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,795] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,796] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,796] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,796] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,796] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,806] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,809] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,809] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,811] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,821] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,821] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,821] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,821] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,821] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,821] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,821] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,821] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,821] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,821] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,822] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:host.name=d1a6c9d635c2 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/k
afka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 
23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,823] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,824] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,825] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,825] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,826] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,826] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,827] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,827] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,827] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,827] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:54 policy-apex-pdp | Waiting for mariadb port 3306... 23:16:54 policy-apex-pdp | mariadb (172.17.0.4:3306) open 23:16:54 policy-apex-pdp | Waiting for kafka port 9092... 23:16:54 policy-apex-pdp | kafka (172.17.0.7:9092) open 23:16:54 policy-apex-pdp | Waiting for pap port 6969... 23:16:54 policy-apex-pdp | pap (172.17.0.10:6969) open 23:16:54 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.312+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.505+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:54 policy-apex-pdp | allow.auto.create.topics = true 23:16:54 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:54 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:54 policy-apex-pdp | auto.offset.reset = latest 23:16:54 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:54 policy-apex-pdp | check.crcs = true 23:16:54 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:54 policy-apex-pdp | client.id = consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-1 23:16:54 policy-apex-pdp | client.rack = 23:16:54 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:54 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:54 policy-apex-pdp | enable.auto.commit = true 23:16:54 policy-apex-pdp | exclude.internal.topics = true 23:16:54 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:54 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:54 policy-apex-pdp | fetch.min.bytes = 1 23:16:54 policy-apex-pdp | group.id = 2e9a8db0-5ced-4fac-ad85-e31c5601b919 23:16:54 policy-apex-pdp | group.instance.id = null 23:16:54 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:54 policy-apex-pdp | interceptor.classes = [] 23:16:54 policy-apex-pdp | internal.leave.group.on.close = true 23:16:54 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:54 policy-apex-pdp | isolation.level = read_uncommitted 23:16:54 policy-apex-pdp | key.deserializer = class 
org.apache.kafka.common.serialization.StringDeserializer 23:16:54 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:54 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:54 policy-apex-pdp | max.poll.records = 500 23:16:54 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:54 policy-apex-pdp | metric.reporters = [] 23:16:54 policy-apex-pdp | metrics.num.samples = 2 23:16:54 policy-apex-pdp | metrics.recording.level = INFO 23:16:54 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:54 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:54 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:54 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:54 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:54 policy-apex-pdp | request.timeout.ms = 30000 23:16:54 policy-apex-pdp | retry.backoff.ms = 100 23:16:54 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:54 policy-apex-pdp | sasl.jaas.config = null 23:16:54 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:54 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,827] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,827] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,829] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,829] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,829] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,829] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,830] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,849] INFO Logging initialized @601ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,930] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,930] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,950] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,984] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,984] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,986] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,988] WARN 
ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 23:16:54 zookeeper_1 | [2024-02-27 23:14:18,996] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,015] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,015] INFO Started @767ms (org.eclipse.jetty.server.Server) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,015] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,020] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,021] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,023] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,025] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,038] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,038] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,040] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,040] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,043] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,044] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,046] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,047] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,047] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,055] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,055] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,068] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 23:16:54 zookeeper_1 | [2024-02-27 23:14:19,069] INFO ZooKeeper audit is disabled. 
(org.apache.zookeeper.audit.ZKAuditProvider) 23:16:54 policy-db-migrator | Waiting for mariadb port 3306... 23:16:54 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:54 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:54 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:54 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:54 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:54 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:54 policy-db-migrator | Connection to mariadb (172.17.0.4) 3306 port [tcp/mysql] succeeded! 23:16:54 policy-db-migrator | 321 blocks 23:16:54 policy-db-migrator | Preparing upgrade release version: 0800 23:16:54 policy-db-migrator | Preparing upgrade release version: 0900 23:16:54 policy-db-migrator | Preparing upgrade release version: 1000 23:16:54 policy-db-migrator | Preparing upgrade release version: 1100 23:16:54 policy-db-migrator | Preparing upgrade release version: 1200 23:16:54 policy-db-migrator | Preparing upgrade release version: 1300 23:16:54 policy-db-migrator | Done 23:16:54 policy-db-migrator | name version 23:16:54 policy-db-migrator | policyadmin 0 23:16:54 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 23:16:54 policy-db-migrator | upgrade: 0 -> 1300 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, 
localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 zookeeper_1 | [2024-02-27 23:14:20,151] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) 
NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 23:16:54 policy-api | Waiting for mariadb port 3306... 23:16:54 policy-api | mariadb (172.17.0.4:3306) open 23:16:54 policy-api | Waiting for policy-db-migrator port 6824... 23:16:54 policy-api | policy-db-migrator (172.17.0.8:6824) open 23:16:54 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 23:16:54 policy-api | 23:16:54 policy-api | . 
____ _ __ _ _ 23:16:54 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:54 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:54 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:54 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:54 policy-api | =========|_|==============|___/=/_/_/_/ 23:16:54 policy-api | :: Spring Boot :: (v3.1.8) 23:16:54 policy-api | 23:16:54 policy-api | [2024-02-27T23:14:32.590+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 21 (/app/api.jar started by policy in /opt/app/policy/api/bin) 23:16:54 policy-api | [2024-02-27T23:14:32.591+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:54 policy-api | [2024-02-27T23:14:34.204+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:54 policy-api | [2024-02-27T23:14:34.297+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 84 ms. Found 6 JPA repository interfaces. 23:16:54 policy-api | [2024-02-27T23:14:34.678+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:54 policy-api | [2024-02-27T23:14:34.678+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:54 policy-api | [2024-02-27T23:14:35.319+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:54 policy-api | [2024-02-27T23:14:35.328+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:54 policy-api | [2024-02-27T23:14:35.330+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:54 policy-api | [2024-02-27T23:14:35.330+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:16:54 policy-api | [2024-02-27T23:14:35.417+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:54 policy-api | [2024-02-27T23:14:35.417+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2761 ms 23:16:54 policy-api | [2024-02-27T23:14:35.819+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF 
NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) 
NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-api | [2024-02-27T23:14:35.901+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:16:54 policy-api | [2024-02-27T23:14:35.904+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:16:54 policy-api | [2024-02-27T23:14:35.948+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:54 policy-api | [2024-02-27T23:14:36.286+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:54 policy-api | [2024-02-27T23:14:36.306+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:54 policy-api | [2024-02-27T23:14:36.399+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@63b3ee82 23:16:54 policy-api | [2024-02-27T23:14:36.401+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:16:54 policy-api | [2024-02-27T23:14:38.176+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:54 policy-api | [2024-02-27T23:14:38.180+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:54 policy-api | [2024-02-27T23:14:39.158+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 23:16:54 policy-api | [2024-02-27T23:14:39.928+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 23:16:54 policy-api | [2024-02-27T23:14:41.022+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:54 policy-api | [2024-02-27T23:14:41.225+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@4bbb00a4, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@543d242e, org.springframework.security.web.context.SecurityContextHolderFilter@62c4ad40, org.springframework.security.web.header.HeaderWriterFilter@4567dcbc, org.springframework.security.web.authentication.logout.LogoutFilter@53d257e7, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@58d291c1, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@9bc10bd, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@2e26841f, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@5f967ad3, org.springframework.security.web.access.ExceptionTranslationFilter@6aca85da, org.springframework.security.web.access.intercept.AuthorizationFilter@2f84848e] 23:16:54 policy-api | [2024-02-27T23:14:42.055+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:54 policy-api | [2024-02-27T23:14:42.166+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:54 policy-api | [2024-02-27T23:14:42.192+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 23:16:54 policy-api | [2024-02-27T23:14:42.212+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.343 seconds (process running for 10.963) 23:16:54 policy-api | [2024-02-27T23:15:01.203+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:54 policy-api | [2024-02-27T23:15:01.203+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 23:16:54 policy-api | [2024-02-27T23:15:01.205+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms 23:16:54 policy-api | [2024-02-27T23:15:01.484+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: 23:16:54 policy-api | [] 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE 
TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0450-pdpgroup.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0470-pdp.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 23:16:54 
policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 23:16:54 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:54 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:54 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:54 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:54 policy-apex-pdp | sasl.login.class = null 23:16:54 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:54 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:54 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:54 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:54 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:54 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:54 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:54 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:54 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:54 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:54 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:54 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:54 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:54 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:54 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:54 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:54 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:54 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:54 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:54 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:54 policy-apex-pdp | security.providers = null 23:16:54 policy-apex-pdp | send.buffer.bytes = 131072 23:16:54 policy-apex-pdp | session.timeout.ms = 45000 23:16:54 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:54 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:54 policy-apex-pdp | ssl.cipher.suites = null 23:16:54 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:54 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:54 
policy-apex-pdp | ssl.engine.factory.class = null 23:16:54 policy-apex-pdp | ssl.key.password = null 23:16:54 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:54 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:54 policy-apex-pdp | ssl.keystore.key = null 23:16:54 policy-apex-pdp | ssl.keystore.location = null 23:16:54 policy-apex-pdp | ssl.keystore.password = null 23:16:54 policy-apex-pdp | ssl.keystore.type = JKS 23:16:54 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:54 policy-apex-pdp | ssl.provider = null 23:16:54 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:54 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:54 policy-apex-pdp | ssl.truststore.certificates = null 23:16:54 policy-apex-pdp | ssl.truststore.location = null 23:16:54 policy-apex-pdp | ssl.truststore.password = null 23:16:54 policy-apex-pdp | ssl.truststore.type = JKS 23:16:54 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:54 policy-apex-pdp | 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.662+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.662+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.662+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709075695661 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.664+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-1, groupId=2e9a8db0-5ced-4fac-ad85-e31c5601b919] Subscribed to topic(s): policy-pdp-pap 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.676+00:00|INFO|ServiceManager|main] service manager starting 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.677+00:00|INFO|ServiceManager|main] service manager starting topics 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.680+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2e9a8db0-5ced-4fac-ad85-e31c5601b919, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.699+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:54 policy-apex-pdp | allow.auto.create.topics = true 23:16:54 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:54 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:54 policy-apex-pdp | auto.offset.reset = latest 23:16:54 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:54 policy-apex-pdp | check.crcs = true 23:16:54 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:54 policy-apex-pdp | client.id = consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2 23:16:54 policy-apex-pdp | client.rack = 23:16:54 kafka | ===> User 23:16:54 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:54 kafka | ===> Configuring ... 23:16:54 kafka | Running in Zookeeper mode... 23:16:54 kafka | ===> Running preflight checks ... 23:16:54 kafka | ===> Check if /var/lib/kafka/data is writable ... 23:16:54 kafka | ===> Check if Zookeeper is healthy ... 
23:16:54 kafka | SLF4J: Class path contains multiple SLF4J bindings. 23:16:54 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:54 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:54 kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 23:16:54 kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] 23:16:54 kafka | [2024-02-27 23:14:20,079] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:host.name=29a4774add15 (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:54 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:54 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:54 policy-apex-pdp | enable.auto.commit = true 23:16:54 policy-apex-pdp | exclude.internal.topics = true 23:16:54 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:54 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:54 policy-apex-pdp | fetch.min.bytes = 1 23:16:54 policy-apex-pdp | group.id = 2e9a8db0-5ced-4fac-ad85-e31c5601b919 23:16:54 policy-apex-pdp | group.instance.id = null 23:16:54 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:54 policy-apex-pdp | interceptor.classes = [] 23:16:54 policy-apex-pdp | internal.leave.group.on.close = true 23:16:54 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:54 policy-apex-pdp | isolation.level = read_uncommitted 23:16:54 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:54 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:54 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:54 policy-apex-pdp | max.poll.records = 500 23:16:54 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:54 policy-apex-pdp | metric.reporters = [] 23:16:54 policy-apex-pdp | metrics.num.samples = 2 23:16:54 policy-apex-pdp | metrics.recording.level = INFO 23:16:54 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:54 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:54 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:54 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:54 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:54 policy-apex-pdp | request.timeout.ms = 30000 23:16:54 policy-apex-pdp | retry.backoff.ms = 100 23:16:54 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:54 policy-apex-pdp | sasl.jaas.config = null 23:16:54 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:54 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:54 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:54 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:54 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor 
= 0.8 23:16:54 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:54 policy-apex-pdp | sasl.login.class = null 23:16:54 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:54 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:54 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:54 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:54 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:54 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:54 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:54 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:54 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:54 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:54 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:54 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:54 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:54 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:54 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:54 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:54 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:54 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:54 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:54 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:54 policy-apex-pdp | security.providers = null 23:16:54 policy-apex-pdp | send.buffer.bytes = 131072 23:16:54 policy-apex-pdp | session.timeout.ms = 45000 23:16:54 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:54 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:54 policy-apex-pdp | ssl.cipher.suites = null 23:16:54 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:54 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:54 policy-apex-pdp | ssl.engine.factory.class = null 23:16:54 policy-apex-pdp | ssl.key.password = null 23:16:54 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:54 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:54 policy-apex-pdp | ssl.keystore.key = null 23:16:54 policy-apex-pdp | ssl.keystore.location = null 23:16:54 policy-apex-pdp | ssl.keystore.password = null 23:16:54 policy-apex-pdp | ssl.keystore.type = JKS 23:16:54 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:54 policy-apex-pdp | ssl.provider = null 23:16:54 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:54 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:54 policy-apex-pdp | ssl.truststore.certificates = null 23:16:54 policy-apex-pdp | ssl.truststore.location = null 23:16:54 policy-apex-pdp | ssl.truststore.password = null 23:16:54 policy-apex-pdp | ssl.truststore.type = JKS 23:16:54 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:54 policy-apex-pdp | 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.707+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.707+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.708+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709075695707 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.708+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2, 
groupId=2e9a8db0-5ced-4fac-ad85-e31c5601b919] Subscribed to topic(s): policy-pdp-pap 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.708+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c0557bac-bd36-41e1-a310-212563aafcf1, alive=false, publisher=null]]: starting 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.720+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:54 policy-apex-pdp | acks = -1 23:16:54 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:54 policy-apex-pdp | batch.size = 16384 23:16:54 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:54 policy-apex-pdp | buffer.memory = 33554432 23:16:54 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:54 policy-apex-pdp | client.id = producer-1 23:16:54 policy-apex-pdp | compression.type = none 23:16:54 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:54 policy-apex-pdp | delivery.timeout.ms = 120000 23:16:54 policy-apex-pdp | enable.idempotence = true 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-example
s-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base
-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client 
environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,080] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,083] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@2fd6b6c7 (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:20,087] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:54 kafka | [2024-02-27 23:14:20,091] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:54 kafka | [2024-02-27 23:14:20,103] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:54 kafka | [2024-02-27 23:14:20,115] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) 23:16:54 kafka | [2024-02-27 23:14:20,116] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 23:16:54 kafka | [2024-02-27 23:14:20,130] INFO Socket connection established, initiating session, client: /172.17.0.7:54322, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 23:16:54 kafka | [2024-02-27 23:14:20,167] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003be1d0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 23:16:54 kafka | [2024-02-27 23:14:20,288] INFO Session: 0x1000003be1d0000 closed (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | Using log4j config /etc/kafka/log4j.properties 23:16:54 kafka | ===> Launching ... 23:16:54 kafka | ===> Launching kafka ... 
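The two short ZooKeeper sessions logged above (connect to zookeeper:2181, negotiate the timeout, close) are the container's readiness probe before the broker itself is launched. A minimal sketch of that pattern with the plain Apache ZooKeeper client, assuming only the connect string and the 40000 ms session timeout shown in the log; the class name, watcher logic and printed message are illustrative and not the actual io.confluent.admin.utils probe:

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkReadinessProbeSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Connect string and session timeout taken from the log (zookeeper:2181, 40000 ms).
        ZooKeeper zk = new ZooKeeper("zookeeper:2181", 40000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();  // corresponds to "Session establishment complete on server ..."
        zk.close();         // corresponds to "Session: ... closed" before Kafka is launched
        System.out.println("ZooKeeper reachable, launching kafka ...");
    }
}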
23:16:54 kafka | [2024-02-27 23:14:21,053] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 23:16:54 kafka | [2024-02-27 23:14:21,438] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:54 kafka | [2024-02-27 23:14:21,516] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 23:16:54 kafka | [2024-02-27 23:14:21,517] INFO starting (kafka.server.KafkaServer) 23:16:54 kafka | [2024-02-27 23:14:21,517] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 23:16:54 kafka | [2024-02-27 23:14:21,531] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 23:16:54 kafka | [2024-02-27 23:14:21,535] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,535] INFO Client environment:host.name=29a4774add15 (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,535] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,535] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,535] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:54 policy-apex-pdp | interceptor.classes = [] 23:16:54 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:54 policy-apex-pdp | linger.ms = 0 23:16:54 policy-apex-pdp | max.block.ms = 60000 23:16:54 policy-apex-pdp | max.in.flight.requests.per.connection = 5 23:16:54 policy-apex-pdp | max.request.size = 1048576 23:16:54 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:54 policy-apex-pdp | metadata.max.idle.ms = 300000 23:16:54 policy-apex-pdp | metric.reporters = [] 23:16:54 policy-apex-pdp | metrics.num.samples = 2 23:16:54 policy-apex-pdp | metrics.recording.level = INFO 23:16:54 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:54 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 23:16:54 policy-apex-pdp | partitioner.availability.timeout.ms = 0 23:16:54 policy-apex-pdp | partitioner.class = null 23:16:54 policy-apex-pdp | partitioner.ignore.keys = false 23:16:54 policy-apex-pdp | receive.buffer.bytes = 32768 23:16:54 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:54 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:54 policy-apex-pdp | request.timeout.ms = 30000 23:16:54 policy-apex-pdp | retries = 2147483647 23:16:54 policy-apex-pdp | retry.backoff.ms = 100 23:16:54 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:54 policy-apex-pdp | sasl.jaas.config = null 23:16:54 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:54 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:54 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:54 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:54 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:54 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:54 policy-apex-pdp | sasl.login.class = null 23:16:54 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:54 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:54 
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:54 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:54 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:54 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:54 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:54 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:54 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:54 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:54 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:54 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:54 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:54 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:54 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:54 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:54 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:54 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:54 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:54 kafka | [2024-02-27 23:14:21,535] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-
api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/jav
a/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,535] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,535] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,535] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,535] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,536] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,536] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,536] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,536] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,536] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,536] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,536] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,536] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,538] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@5b619d14 (org.apache.zookeeper.ZooKeeper) 23:16:54 kafka | [2024-02-27 23:14:21,541] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:54 kafka | [2024-02-27 23:14:21,548] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:54 kafka | [2024-02-27 23:14:21,549] INFO [ZooKeeperClient Kafka server] Waiting until connected. 
(kafka.zookeeper.ZooKeeperClient) 23:16:54 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:54 policy-apex-pdp | security.providers = null 23:16:54 policy-apex-pdp | send.buffer.bytes = 131072 23:16:54 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:54 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:54 policy-apex-pdp | ssl.cipher.suites = null 23:16:54 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:54 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:54 policy-apex-pdp | ssl.engine.factory.class = null 23:16:54 policy-apex-pdp | ssl.key.password = null 23:16:54 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:54 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:54 policy-apex-pdp | ssl.keystore.key = null 23:16:54 policy-apex-pdp | ssl.keystore.location = null 23:16:54 policy-apex-pdp | ssl.keystore.password = null 23:16:54 policy-apex-pdp | ssl.keystore.type = JKS 23:16:54 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:54 policy-apex-pdp | ssl.provider = null 23:16:54 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:54 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:54 policy-apex-pdp | ssl.truststore.certificates = null 23:16:54 policy-apex-pdp | ssl.truststore.location = null 23:16:54 policy-apex-pdp | ssl.truststore.password = null 23:16:54 policy-apex-pdp | ssl.truststore.type = JKS 23:16:54 policy-apex-pdp | transaction.timeout.ms = 60000 23:16:54 policy-apex-pdp | transactional.id = null 23:16:54 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:54 policy-apex-pdp | 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.728+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
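The ProducerConfig dump above (bootstrap.servers=[kafka:9092], acks=-1, enable.idempotence=true, client.id=producer-1, StringSerializer for key and value) corresponds to a producer built roughly as sketched below. This is an illustrative sketch, not the policy-apex-pdp source; the class name and the sample payload are invented.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpPapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // bootstrap.servers
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "producer-1");           // client.id
        props.put(ProducerConfig.ACKS_CONFIG, "all");                       // acks = -1 in the dump
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);          // enable.idempotence
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish to the same topic the sink above is bound to; payload is a placeholder.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
            producer.flush();
        }
    }
}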
23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.743+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.743+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.743+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709075695743 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.744+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c0557bac-bd36-41e1-a310-212563aafcf1, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.744+00:00|INFO|ServiceManager|main] service manager starting set alive 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.744+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.746+00:00|INFO|ServiceManager|main] service manager starting topic sinks 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.746+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.747+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.748+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.753+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.754+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2e9a8db0-5ced-4fac-ad85-e31c5601b919, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.754+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2e9a8db0-5ced-4fac-ad85-e31c5601b919, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.755+00:00|INFO|ServiceManager|main] service manager starting Create REST server 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.765+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 23:16:54 policy-apex-pdp | [] 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.766+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:54 policy-apex-pdp | 
{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"57c9b11d-696e-47af-ba9c-ef387684de94","timestampMs":1709075695753,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup"} 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.938+00:00|INFO|ServiceManager|main] service manager starting Rest Server 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.938+00:00|INFO|ServiceManager|main] service manager starting 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.938+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.938+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@63f34b70{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@641856{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.949+00:00|INFO|ServiceManager|main] service manager started 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.949+00:00|INFO|ServiceManager|main] service manager started 23:16:54 mariadb | 2024-02-27 23:14:17+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:54 mariadb | 2024-02-27 23:14:17+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 23:16:54 mariadb | 2024-02-27 23:14:17+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:54 mariadb | 2024-02-27 23:14:17+00:00 [Note] [Entrypoint]: Initializing database files 23:16:54 mariadb | 2024-02-27 23:14:17 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:54 mariadb | 2024-02-27 23:14:17 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:54 mariadb | 2024-02-27 23:14:17 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:54 mariadb | 23:16:54 mariadb | 23:16:54 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 23:16:54 mariadb | To do so, start the server, then issue the following command: 23:16:54 mariadb | 23:16:54 mariadb | '/usr/bin/mysql_secure_installation' 23:16:54 mariadb | 23:16:54 mariadb | which will also give you the option of removing the test 23:16:54 mariadb | databases and anonymous user created by default. 
This is 23:16:54 mariadb | strongly recommended for production servers. 23:16:54 mariadb | 23:16:54 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 23:16:54 mariadb | 23:16:54 mariadb | Please report any problems at https://mariadb.org/jira 23:16:54 mariadb | 23:16:54 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 23:16:54 mariadb | 23:16:54 mariadb | Consider joining MariaDB's strong and vibrant community: 23:16:54 mariadb | https://mariadb.org/get-involved/ 23:16:54 mariadb | 23:16:54 mariadb | 2024-02-27 23:14:19+00:00 [Note] [Entrypoint]: Database files initialized 23:16:54 mariadb | 2024-02-27 23:14:19+00:00 [Note] [Entrypoint]: Starting temporary server 23:16:54 mariadb | 2024-02-27 23:14:19+00:00 [Note] [Entrypoint]: Waiting for server startup 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ... 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.949+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@63f34b70{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@641856{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:54 policy-apex-pdp | [2024-02-27T23:14:55.951+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
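The heartbeat published to policy-pdp-pap above is plain JSON, and the service uses GSON elsewhere (see "Using GSON for REST calls" further down), so a Gson-based parse is a reasonable illustration of the payload shape. The PdpStatus class below is a simplified stand-in, not the real ONAP model class; the field names and values are copied from the logged message.

import com.google.gson.Gson;

public class HeartbeatParseSketch {
    // Simplified stand-in for the logged PDP_STATUS payload; not the actual ONAP model class.
    static class PdpStatus {
        String pdpType, state, healthy, description, messageName, requestId, name, pdpGroup;
        long timestampMs;
    }

    public static void main(String[] args) {
        String json = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                + "\"description\":\"Pdp Heartbeat\",\"messageName\":\"PDP_STATUS\","
                + "\"timestampMs\":1709075695753,"
                + "\"name\":\"apex-96c0945f-1271-4075-8707-21652b619ca8\","
                + "\"pdpGroup\":\"defaultGroup\"}";
        PdpStatus status = new Gson().fromJson(json, PdpStatus.class);
        System.out.println(status.name + " is " + status.state + " in " + status.pdpGroup);
    }
}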
23:16:54 policy-apex-pdp | [2024-02-27T23:14:56.051+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 1E2F8WPXTiubxm6qH6MBlQ 23:16:54 policy-apex-pdp | [2024-02-27T23:14:56.051+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2, groupId=2e9a8db0-5ced-4fac-ad85-e31c5601b919] Cluster ID: 1E2F8WPXTiubxm6qH6MBlQ 23:16:54 policy-apex-pdp | [2024-02-27T23:14:56.053+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 23:16:54 policy-apex-pdp | [2024-02-27T23:14:56.054+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2, groupId=2e9a8db0-5ced-4fac-ad85-e31c5601b919] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:54 policy-apex-pdp | [2024-02-27T23:14:56.060+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2, groupId=2e9a8db0-5ced-4fac-ad85-e31c5601b919] (Re-)joining group 23:16:54 policy-apex-pdp | [2024-02-27T23:14:56.079+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2, groupId=2e9a8db0-5ced-4fac-ad85-e31c5601b919] Request joining group due to: need to re-join with the given member-id: consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2-b8f1b9fb-2b36-44b6-b54b-f1ddb8d6f785 23:16:54 policy-apex-pdp | [2024-02-27T23:14:56.079+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2, groupId=2e9a8db0-5ced-4fac-ad85-e31c5601b919] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:54 policy-apex-pdp | [2024-02-27T23:14:56.079+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2, groupId=2e9a8db0-5ced-4fac-ad85-e31c5601b919] (Re-)joining group 23:16:54 policy-apex-pdp | [2024-02-27T23:14:56.545+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 23:16:54 policy-apex-pdp | [2024-02-27T23:14:56.546+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 23:16:54 policy-apex-pdp | [2024-02-27T23:14:56.701+00:00|INFO|RequestLog|qtp1068445309-33] 172.17.0.2 - policyadmin [27/Feb/2024:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.50.1" 23:16:54 policy-apex-pdp | [2024-02-27T23:14:59.083+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2, groupId=2e9a8db0-5ced-4fac-ad85-e31c5601b919] Successfully joined group with generation Generation{generationId=1, memberId='consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2-b8f1b9fb-2b36-44b6-b54b-f1ddb8d6f785', protocol='range'} 23:16:54 policy-apex-pdp | [2024-02-27T23:14:59.093+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2, groupId=2e9a8db0-5ced-4fac-ad85-e31c5601b919] Finished assignment for group at generation 1: {consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2-b8f1b9fb-2b36-44b6-b54b-f1ddb8d6f785=Assignment(partitions=[policy-pdp-pap-0])} 23:16:54 policy-apex-pdp | [2024-02-27T23:14:59.100+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2, groupId=2e9a8db0-5ced-4fac-ad85-e31c5601b919] Successfully synced group in generation Generation{generationId=1, memberId='consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2-b8f1b9fb-2b36-44b6-b54b-f1ddb8d6f785', protocol='range'} 23:16:54 policy-apex-pdp | [2024-02-27T23:14:59.101+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2, groupId=2e9a8db0-5ced-4fac-ad85-e31c5601b919] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:54 policy-apex-pdp | [2024-02-27T23:14:59.102+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2, groupId=2e9a8db0-5ced-4fac-ad85-e31c5601b919] Adding newly assigned partitions: policy-pdp-pap-0 23:16:54 policy-apex-pdp | [2024-02-27T23:14:59.108+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2, groupId=2e9a8db0-5ced-4fac-ad85-e31c5601b919] Found no committed offset for partition policy-pdp-pap-0 23:16:54 policy-apex-pdp | [2024-02-27T23:14:59.118+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2, groupId=2e9a8db0-5ced-4fac-ad85-e31c5601b919] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
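The consumer-group sequence above (join rejected until a member id is assigned, re-join, partition assignment, offset reset because no committed offset exists) is the normal first-join handshake for a new group. A minimal consumer sketch that would produce the same sequence follows; the group id, topic and 15000 ms fetch timeout are taken from the log, while the auto.offset.reset value and the class name are assumptions.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "2e9a8db0-5ced-4fac-ad85-e31c5601b919"); // group id from the log
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // With no committed offset the consumer falls back to this policy, as in the
        // "Found no committed offset ... Resetting offset" lines above ("latest" is an assumption).
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));   // triggers the join/rebalance handshake
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(15000))) {
                System.out.println("[IN|KAFKA|policy-pdp-pap] " + record.value());
            }
        }
    }
}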
23:16:54 policy-apex-pdp | [2024-02-27T23:15:15.750+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:54 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"03fe34c6-820b-42c9-831a-589ef163ea8f","timestampMs":1709075715750,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup"} 23:16:54 policy-apex-pdp | [2024-02-27T23:15:15.772+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:54 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"03fe34c6-820b-42c9-831a-589ef163ea8f","timestampMs":1709075715750,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup"} 23:16:54 policy-apex-pdp | [2024-02-27T23:15:15.774+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:54 policy-apex-pdp | [2024-02-27T23:15:15.908+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:54 policy-apex-pdp | {"source":"pap-f98d2e00-7ac1-4183-a084-4e9dbf0dfb89","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"6eb49d91-6114-4186-abb4-512213842060","timestampMs":1709075715853,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:54 policy-apex-pdp | [2024-02-27T23:15:15.915+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 23:16:54 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c73643ec-a1e5-42d7-a6f8-df8401235102","timestampMs":1709075715914,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup"} 23:16:54 policy-apex-pdp | [2024-02-27T23:15:15.915+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 23:16:54 policy-apex-pdp | [2024-02-27T23:15:15.916+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:54 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6eb49d91-6114-4186-abb4-512213842060","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"99736685-5d25-4f97-a366-4a5391bf9535","timestampMs":1709075715915,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:54 policy-apex-pdp | [2024-02-27T23:15:15.931+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:54 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c73643ec-a1e5-42d7-a6f8-df8401235102","timestampMs":1709075715914,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup"} 23:16:54 policy-apex-pdp | [2024-02-27T23:15:15.931+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Note] InnoDB: Number of transaction pools: 1 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Note] mariadbd: O_TMPFILE is not 
supported on /tmp (disabling future attempts) 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Note] InnoDB: 128 rollback segments are active. 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Note] InnoDB: log sequence number 46590; transaction id 14 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 23:16:54 mariadb | 2024-02-27 23:14:19 0 [Note] mariadbd: ready for connections. 23:16:54 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:16:54 mariadb | 2024-02-27 23:14:20+00:00 [Note] [Entrypoint]: Temporary server started. 23:16:54 mariadb | 2024-02-27 23:14:22+00:00 [Note] [Entrypoint]: Creating user policy_user 23:16:54 mariadb | 2024-02-27 23:14:22+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:16:54 mariadb | 23:16:54 mariadb | 2024-02-27 23:14:22+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:16:54 mariadb | 23:16:54 mariadb | 2024-02-27 23:14:22+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:16:54 mariadb | #!/bin/bash -xv 23:16:54 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 23:16:54 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 23:16:54 mariadb | # 23:16:54 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:16:54 mariadb | # you may not use this file except in compliance with the License. 23:16:54 mariadb | # You may obtain a copy of the License at 23:16:54 mariadb | # 23:16:54 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:16:54 mariadb | # 23:16:54 mariadb | # Unless required by applicable law or agreed to in writing, software 23:16:54 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:16:54 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:16:54 mariadb | # See the License for the specific language governing permissions and 23:16:54 mariadb | # limitations under the License. 
23:16:54 mariadb | 23:16:54 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:54 mariadb | do 23:16:54 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 23:16:54 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 23:16:54 mariadb | done 23:16:54 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:54 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 23:16:54 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:54 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:54 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 23:16:54 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:54 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:54 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 23:16:54 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:54 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:54 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 23:16:54 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:54 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:54 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 23:16:54 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:54 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:54 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 23:16:54 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:54 mariadb | 23:16:54 kafka | [2024-02-27 23:14:21,553] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) 23:16:54 kafka | [2024-02-27 23:14:21,562] INFO Socket connection established, initiating session, client: /172.17.0.7:54324, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 23:16:54 kafka | [2024-02-27 23:14:21,569] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003be1d0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 23:16:54 kafka | [2024-02-27 23:14:21,574] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) 23:16:54 kafka | [2024-02-27 23:14:22,006] INFO Cluster ID = 1E2F8WPXTiubxm6qH6MBlQ (kafka.server.KafkaServer) 23:16:54 kafka | [2024-02-27 23:14:22,009] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 23:16:54 kafka | [2024-02-27 23:14:22,056] INFO KafkaConfig values: 23:16:54 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 23:16:54 kafka | alter.config.policy.class.name = null 23:16:54 kafka | alter.log.dirs.replication.quota.window.num = 11 23:16:54 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 23:16:54 kafka | authorizer.class.name = 23:16:54 kafka | auto.create.topics.enable = true 23:16:54 kafka | auto.include.jmx.reporter = true 23:16:54 kafka | auto.leader.rebalance.enable = true 23:16:54 kafka | background.threads = 10 23:16:54 kafka | broker.heartbeat.interval.ms = 2000 23:16:54 kafka | broker.id = 1 23:16:54 kafka | broker.id.generation.enable = true 23:16:54 kafka | broker.rack = null 23:16:54 kafka | broker.session.timeout.ms = 9000 23:16:54 kafka | client.quota.callback.class = null 23:16:54 kafka | compression.type = producer 23:16:54 kafka | connection.failed.authentication.delay.ms = 100 23:16:54 kafka | connections.max.idle.ms = 600000 23:16:54 kafka | connections.max.reauth.ms = 0 23:16:54 kafka | control.plane.listener.name = null 23:16:54 kafka | controlled.shutdown.enable = true 23:16:54 kafka | controlled.shutdown.max.retries = 3 23:16:54 kafka | controlled.shutdown.retry.backoff.ms = 5000 23:16:54 kafka | controller.listener.names = null 23:16:54 kafka | controller.quorum.append.linger.ms = 25 23:16:54 kafka | controller.quorum.election.backoff.max.ms = 1000 23:16:54 kafka | controller.quorum.election.timeout.ms = 1000 23:16:54 kafka | controller.quorum.fetch.timeout.ms = 2000 23:16:54 kafka | controller.quorum.request.timeout.ms = 2000 23:16:54 kafka | controller.quorum.retry.backoff.ms = 20 23:16:54 kafka | controller.quorum.voters = [] 23:16:54 kafka | controller.quota.window.num = 11 23:16:54 kafka | controller.quota.window.size.seconds = 1 23:16:54 kafka | controller.socket.timeout.ms = 30000 23:16:54 kafka | create.topic.policy.class.name = null 23:16:54 kafka | default.replication.factor = 1 23:16:54 kafka | delegation.token.expiry.check.interval.ms = 3600000 23:16:54 kafka | delegation.token.expiry.time.ms = 86400000 23:16:54 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 23:16:54 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 23:16:54 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 23:16:54 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 23:16:54 mariadb | 23:16:54 mariadb | 2024-02-27 23:14:23+00:00 [Note] [Entrypoint]: Stopping temporary server 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: FTS optimize thread exiting. 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: Starting shutdown... 
23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: Buffer pool(s) dump completed at 240227 23:14:23 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: Shutdown completed; log sequence number 331846; transaction id 298 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] mariadbd: Shutdown complete 23:16:54 mariadb | 23:16:54 mariadb | 2024-02-27 23:14:23+00:00 [Note] [Entrypoint]: Temporary server stopped 23:16:54 mariadb | 23:16:54 mariadb | 2024-02-27 23:14:23+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 23:16:54 mariadb | 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: Number of transaction pools: 1 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: 128 rollback segments are active. 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: log sequence number 331846; transaction id 299 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] Server socket created on IP: '0.0.0.0'. 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] Server socket created on IP: '::'. 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] mariadbd: ready for connections. 
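Once the permanent server reports "ready for connections", the databases and grants created by the traced db.sh loop earlier in the log are usable. A small JDBC smoke check as the granted user is sketched below, assuming the MariaDB JDBC driver is on the classpath; the host name "mariadb" (the compose service name) is an assumption, 3306 is the MariaDB default port also reported just below, and the user/password pair comes from the script trace.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DbSmokeCheckSketch {
    public static void main(String[] args) throws Exception {
        // Database and credentials as created/granted by the db.sh trace (policy_user / policy_user).
        String url = "jdbc:mariadb://mariadb:3306/policyadmin";
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT DATABASE(), CURRENT_USER()")) {
            if (rs.next()) {
                System.out.println(rs.getString(1) + " reachable as " + rs.getString(2));
            }
        }
    }
}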
23:16:54 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 23:16:54 mariadb | 2024-02-27 23:14:23 0 [Note] InnoDB: Buffer pool(s) load completed at 240227 23:14:23 23:16:54 mariadb | 2024-02-27 23:14:24 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 23:16:54 mariadb | 2024-02-27 23:14:24 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 23:16:54 mariadb | 2024-02-27 23:14:24 9 [Warning] Aborted connection 9 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 23:16:54 mariadb | 2024-02-27 23:14:24 25 [Warning] Aborted connection 25 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES 
(name, version)) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-db-migrator | 23:16:54 policy-db-migrator | 23:16:54 policy-apex-pdp | [2024-02-27T23:15:15.937+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:54 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6eb49d91-6114-4186-abb4-512213842060","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"99736685-5d25-4f97-a366-4a5391bf9535","timestampMs":1709075715915,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:54 policy-apex-pdp | [2024-02-27T23:15:15.937+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:54 policy-apex-pdp | [2024-02-27T23:15:15.947+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:54 policy-apex-pdp | {"source":"pap-f98d2e00-7ac1-4183-a084-4e9dbf0dfb89","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8c2bf324-be4c-4c28-974f-549c594c5dc4","timestampMs":1709075715854,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:54 policy-apex-pdp | [2024-02-27T23:15:15.950+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:54 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"8c2bf324-be4c-4c28-974f-549c594c5dc4","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"42f81e5d-a936-45f9-9d7c-20ab91668a9d","timestampMs":1709075715949,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:54 policy-apex-pdp | [2024-02-27T23:15:15.958+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:54 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"8c2bf324-be4c-4c28-974f-549c594c5dc4","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"42f81e5d-a936-45f9-9d7c-20ab91668a9d","timestampMs":1709075715949,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:54 policy-apex-pdp | [2024-02-27T23:15:15.958+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:54 policy-apex-pdp | [2024-02-27T23:15:16.008+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:54 policy-apex-pdp | {"source":"pap-f98d2e00-7ac1-4183-a084-4e9dbf0dfb89","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e52ab644-6231-4e4d-bad3-1f9b282d83a5","timestampMs":1709075715974,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:54 policy-apex-pdp | [2024-02-27T23:15:16.009+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:54 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e52ab644-6231-4e4d-bad3-1f9b282d83a5","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"f206ef84-db1f-46b7-8de2-6ff01d93655f","timestampMs":1709075716009,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:54 policy-apex-pdp | [2024-02-27T23:15:16.017+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:54 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e52ab644-6231-4e4d-bad3-1f9b282d83a5","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"f206ef84-db1f-46b7-8de2-6ff01d93655f","timestampMs":1709075716009,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:54 policy-apex-pdp | [2024-02-27T23:15:16.017+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:54 policy-apex-pdp | [2024-02-27T23:15:56.084+00:00|INFO|RequestLog|qtp1068445309-27] 172.17.0.2 - policyadmin [27/Feb/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10640 "-" "Prometheus/2.50.1" 23:16:54 prometheus | ts=2024-02-27T23:14:13.137Z caller=main.go:564 level=info msg="No time or size retention was set so using the default time retention" duration=15d 23:16:54 prometheus | ts=2024-02-27T23:14:13.137Z caller=main.go:608 level=info msg="Starting Prometheus Server" mode=server version="(version=2.50.1, branch=HEAD, revision=8c9b0285360a0b6288d76214a75ce3025bce4050)" 23:16:54 prometheus | ts=2024-02-27T23:14:13.137Z 
caller=main.go:613 level=info build_context="(go=go1.21.7, platform=linux/amd64, user=root@6213bb3ee580, date=20240226-11:36:26, tags=netgo,builtinassets,stringlabels)" 23:16:54 prometheus | ts=2024-02-27T23:14:13.137Z caller=main.go:614 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 23:16:54 prometheus | ts=2024-02-27T23:14:13.137Z caller=main.go:615 level=info fd_limits="(soft=1048576, hard=1048576)" 23:16:54 prometheus | ts=2024-02-27T23:14:13.137Z caller=main.go:616 level=info vm_limits="(soft=unlimited, hard=unlimited)" 23:16:54 prometheus | ts=2024-02-27T23:14:13.140Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 23:16:54 prometheus | ts=2024-02-27T23:14:13.140Z caller=main.go:1118 level=info msg="Starting TSDB ..." 23:16:54 prometheus | ts=2024-02-27T23:14:13.150Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 23:16:54 prometheus | ts=2024-02-27T23:14:13.150Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 23:16:54 prometheus | ts=2024-02-27T23:14:13.150Z caller=head.go:610 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 23:16:54 prometheus | ts=2024-02-27T23:14:13.150Z caller=head.go:692 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.39µs 23:16:54 prometheus | ts=2024-02-27T23:14:13.150Z caller=head.go:700 level=info component=tsdb msg="Replaying WAL, this may take a while" 23:16:54 prometheus | ts=2024-02-27T23:14:13.151Z caller=head.go:771 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 23:16:54 prometheus | ts=2024-02-27T23:14:13.151Z caller=head.go:808 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=68.024µs wal_replay_duration=415.768µs wbl_replay_duration=340ns total_replay_duration=675.59µs 23:16:54 prometheus | ts=2024-02-27T23:14:13.153Z caller=main.go:1139 level=info fs_type=EXT4_SUPER_MAGIC 23:16:54 prometheus | ts=2024-02-27T23:14:13.153Z caller=main.go:1142 level=info msg="TSDB started" 23:16:54 prometheus | ts=2024-02-27T23:14:13.153Z caller=main.go:1324 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 23:16:54 prometheus | ts=2024-02-27T23:14:13.154Z caller=main.go:1361 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=964.262µs db_storage=1.59µs remote_storage=1.79µs web_handler=910ns query_engine=1.12µs scrape=189.068µs scrape_sd=124.176µs notify=38.741µs notify_sd=11.601µs rules=2.05µs tracing=8.48µs 23:16:54 prometheus | ts=2024-02-27T23:14:13.154Z caller=main.go:1103 level=info msg="Server is ready to receive web requests." 23:16:54 prometheus | ts=2024-02-27T23:14:13.154Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..." 23:16:54 policy-pap | Waiting for mariadb port 3306... 23:16:54 policy-pap | mariadb (172.17.0.4:3306) open 23:16:54 policy-pap | Waiting for kafka port 9092... 23:16:54 policy-pap | kafka (172.17.0.7:9092) open 23:16:54 policy-pap | Waiting for api port 6969... 23:16:54 policy-pap | api (172.17.0.9:6969) open 23:16:54 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 23:16:54 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 23:16:54 policy-pap | 23:16:54 policy-pap | . 
____ _ __ _ _ 23:16:54 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:54 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:54 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:54 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:54 policy-pap | =========|_|==============|___/=/_/_/_/ 23:16:54 policy-pap | :: Spring Boot :: (v3.1.8) 23:16:54 policy-pap | 23:16:54 policy-pap | [2024-02-27T23:14:44.545+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 33 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 23:16:54 policy-pap | [2024-02-27T23:14:44.547+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:54 policy-pap | [2024-02-27T23:14:46.400+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:54 policy-pap | [2024-02-27T23:14:46.518+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 106 ms. Found 7 JPA repository interfaces. 23:16:54 policy-pap | [2024-02-27T23:14:46.958+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:54 policy-pap | [2024-02-27T23:14:46.958+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:54 policy-pap | [2024-02-27T23:14:47.626+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:54 policy-pap | [2024-02-27T23:14:47.636+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:54 policy-pap | [2024-02-27T23:14:47.639+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:54 policy-pap | [2024-02-27T23:14:47.639+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:16:54 policy-pap | [2024-02-27T23:14:47.736+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:54 policy-pap | [2024-02-27T23:14:47.736+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3108 ms 23:16:54 policy-pap | [2024-02-27T23:14:48.180+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:54 policy-pap | [2024-02-27T23:14:48.281+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:16:54 policy-pap | [2024-02-27T23:14:48.285+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:16:54 policy-pap | [2024-02-27T23:14:48.330+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:54 policy-pap | [2024-02-27T23:14:48.694+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:54 policy-pap | [2024-02-27T23:14:48.715+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:54 policy-pap | [2024-02-27T23:14:48.842+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@124ac145 23:16:54 policy-pap | [2024-02-27T23:14:48.844+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
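Editor's note: the policy-apex-pdp entries earlier in this section show PAP and the PDP exchanging JSON messages on the policy-pdp-pap topic, and the PDP's MessageTypeDispatcher discarding PDP_STATUS events (its own responses echoed back on the shared topic). A minimal sketch of that kind of routing, using only field names visible in the payloads above (the handler functions themselves are hypothetical, not the PAP/PDP implementation), is:

    import json

    # Message types seen in this log: PDP_STATUS, PDP_UPDATE, PDP_STATE_CHANGE.
    def dispatch(raw: str) -> None:
        msg = json.loads(raw)
        name = msg.get("messageName")
        if name == "PDP_STATUS":
            # Mirror the dispatcher behaviour logged above: ignore status responses.
            print(f"discarding event of type {name}")
            return
        if name == "PDP_UPDATE":
            handle_update(msg.get("policiesToBeDeployed", []),
                          msg.get("policiesToBeUndeployed", []))
        elif name == "PDP_STATE_CHANGE":
            handle_state_change(msg.get("state"))

    def handle_update(to_deploy, to_undeploy):  # hypothetical handler
        print("deploy:", to_deploy, "undeploy:", to_undeploy)

    def handle_state_change(state):  # hypothetical handler
        print("state ->", state)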
23:16:54 policy-pap | [2024-02-27T23:14:50.775+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:54 policy-pap | [2024-02-27T23:14:50.779+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:54 policy-pap | [2024-02-27T23:14:51.308+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 23:16:54 policy-pap | [2024-02-27T23:14:51.809+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 23:16:54 policy-pap | [2024-02-27T23:14:51.922+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 23:16:54 policy-pap | [2024-02-27T23:14:52.180+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:54 policy-pap | allow.auto.create.topics = true 23:16:54 policy-pap | auto.commit.interval.ms = 5000 23:16:54 policy-pap | auto.include.jmx.reporter = true 23:16:54 policy-pap | auto.offset.reset = latest 23:16:54 policy-pap | bootstrap.servers = [kafka:9092] 23:16:54 policy-pap | check.crcs = true 23:16:54 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:54 policy-pap | client.id = consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-1 23:16:54 policy-pap | client.rack = 23:16:54 policy-pap | connections.max.idle.ms = 540000 23:16:54 policy-pap | default.api.timeout.ms = 60000 23:16:54 policy-pap | enable.auto.commit = true 23:16:54 policy-pap | exclude.internal.topics = true 23:16:54 policy-pap | fetch.max.bytes = 52428800 23:16:54 policy-pap | fetch.max.wait.ms = 500 23:16:54 policy-pap | fetch.min.bytes = 1 23:16:54 policy-pap | group.id = fd3c6b2f-e961-4dee-b92a-5df6cab870fa 23:16:54 policy-pap | group.instance.id = null 23:16:54 policy-pap | heartbeat.interval.ms = 3000 23:16:54 policy-pap | interceptor.classes = [] 23:16:54 policy-pap | internal.leave.group.on.close = true 23:16:54 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:54 kafka | delegation.token.master.key = null 23:16:54 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:16:54 policy-pap | isolation.level = read_uncommitted 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.095244057Z level=info msg="Starting Grafana" version=10.3.3 commit=252761264e22ece57204b327f9130d3b44592c01 branch=HEAD compiled=2024-02-27T23:14:14Z 23:16:54 kafka | delegation.token.max.lifetime.ms = 604800000 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.095444488Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.095455529Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 23:16:54 kafka | delegation.token.secret.key = null 23:16:54 simulator | Policy simulator config 
file: /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:16:54 policy-pap | max.partition.fetch.bytes = 1048576 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.095459109Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 23:16:54 kafka | delete.records.purgatory.purge.interval.requests = 1 23:16:54 simulator | overriding logback.xml 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | max.poll.interval.ms = 300000 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.095462259Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 23:16:54 simulator | 2024-02-27 23:14:19,693 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:54 policy-db-migrator | 23:16:54 policy-pap | max.poll.records = 500 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.095465359Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 23:16:54 kafka | delete.topic.enable = true 23:16:54 simulator | 2024-02-27 23:14:19,749 INFO org.onap.policy.models.simulators starting 23:16:54 policy-db-migrator | 23:16:54 policy-pap | metadata.max.age.ms = 300000 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.095469409Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 23:16:54 kafka | early.start.listeners = null 23:16:54 simulator | 2024-02-27 23:14:19,749 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 23:16:54 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:16:54 policy-pap | metric.reporters = [] 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.095472429Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 23:16:54 kafka | fetch.max.bytes = 57671680 23:16:54 simulator | 2024-02-27 23:14:19,932 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | metrics.num.samples = 2 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.09547599Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 23:16:54 kafka | fetch.purgatory.purge.interval.requests = 1000 23:16:54 simulator | 2024-02-27 23:14:19,933 INFO org.onap.policy.models.simulators starting A&AI simulator 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:54 policy-pap | metrics.recording.level = INFO 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.09547957Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 23:16:54 simulator | 2024-02-27 23:14:20,040 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | metrics.sample.window.ms = 30000 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.09548399Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 23:16:54 simulator | 2024-02-27 23:14:20,051 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:54 policy-db-migrator | 23:16:54 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.0954875Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 23:16:54 simulator | 2024-02-27 23:14:20,061 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:54 policy-db-migrator | 23:16:54 policy-pap | receive.buffer.bytes = 65536 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.09549143Z level=info msg=Target target=[all] 23:16:54 simulator | 2024-02-27 23:14:20,066 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:54 kafka | group.consumer.assignors = 
[org.apache.kafka.coordinator.group.assignor.RangeAssignor] 23:16:54 policy-pap | reconnect.backoff.max.ms = 1000 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.095498511Z level=info msg="Path Home" path=/usr/share/grafana 23:16:54 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 23:16:54 simulator | 2024-02-27 23:14:20,149 INFO Session workerName=node0 23:16:54 policy-pap | reconnect.backoff.ms = 50 23:16:54 policy-db-migrator | -------------- 23:16:54 simulator | 2024-02-27 23:14:20,767 INFO Using GSON for REST calls 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.095502331Z level=info msg="Path Data" path=/var/lib/grafana 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 23:16:54 simulator | 2024-02-27 23:14:20,846 INFO Started o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE} 23:16:54 policy-pap | request.timeout.ms = 30000 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.095506641Z level=info msg="Path Logs" path=/var/log/grafana 23:16:54 simulator | 2024-02-27 23:14:20,857 INFO Started A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 23:16:54 policy-pap | retry.backoff.ms = 100 23:16:54 policy-db-migrator | -------------- 23:16:54 simulator | 2024-02-27 23:14:20,864 INFO Started Server@45905bff{STARTING}[11.0.20,sto=0] @1675ms 23:16:54 policy-pap | sasl.client.callback.handler.class = null 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.095510751Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 23:16:54 policy-db-migrator | 23:16:54 simulator | 2024-02-27 23:14:20,865 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4196 ms. 
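Editor's note: the policy-db-migrator upgrade scripts running through this section (0520 onward) create TOSCA concept tables and their name/version-keyed join tables with CREATE TABLE IF NOT EXISTS, so re-running them does not fail on tables that already exist. A quick way to confirm the result of the migration is to list the created tables; a minimal sketch, assuming the mysql-connector-python client and placeholder credentials and schema name (not the real ones used by this job), is:

    import mysql.connector  # assumption: mysql-connector-python is installed

    conn = mysql.connector.connect(
        host="mariadb", port=3306,
        user="policy_user", password="policy_password",  # placeholders
        database="policyadmin",                           # placeholder schema name
    )
    cur = conn.cursor()
    cur.execute("SHOW TABLES LIKE 'tosca%'")
    for (table,) in cur.fetchall():
        print(table)
    conn.close()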
23:16:54 policy-pap | sasl.jaas.config = null 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.095513952Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 23:16:54 kafka | group.consumer.heartbeat.interval.ms = 5000 23:16:54 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:16:54 simulator | 2024-02-27 23:14:20,869 INFO org.onap.policy.models.simulators starting SDNC simulator 23:16:54 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:54 grafana | logger=settings t=2024-02-27T23:14:14.095517132Z level=info msg="App mode production" 23:16:54 kafka | group.consumer.max.heartbeat.interval.ms = 15000 23:16:54 policy-db-migrator | -------------- 23:16:54 simulator | 2024-02-27 23:14:20,872 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:54 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:54 grafana | logger=sqlstore t=2024-02-27T23:14:14.095818718Z level=info msg="Connecting to DB" dbtype=sqlite3 23:16:54 kafka | group.consumer.max.session.timeout.ms = 60000 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 23:16:54 simulator | 2024-02-27 23:14:20,875 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:54 policy-pap | sasl.kerberos.service.name = null 23:16:54 grafana | logger=sqlstore t=2024-02-27T23:14:14.095838889Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 23:16:54 kafka | group.consumer.max.size = 2147483647 23:16:54 policy-db-migrator | -------------- 23:16:54 simulator | 2024-02-27 23:14:20,876 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, 
jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:54 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.096427241Z level=info msg="Starting DB migrations" 23:16:54 kafka | group.consumer.min.heartbeat.interval.ms = 5000 23:16:54 policy-db-migrator | 23:16:54 simulator | 2024-02-27 23:14:20,877 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:54 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.097437345Z level=info msg="Executing migration" id="create migration_log table" 23:16:54 kafka | group.consumer.min.session.timeout.ms = 45000 23:16:54 policy-db-migrator | 23:16:54 simulator | 2024-02-27 23:14:20,889 INFO Session workerName=node0 23:16:54 policy-pap | sasl.login.callback.handler.class = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.098210588Z level=info msg="Migration successfully executed" id="create migration_log table" duration=772.743µs 23:16:54 kafka | group.consumer.session.timeout.ms = 45000 23:16:54 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 23:16:54 simulator | 2024-02-27 23:14:20,990 INFO Using GSON for REST calls 23:16:54 policy-pap | sasl.login.class = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.102807946Z level=info msg="Executing migration" id="create user table" 23:16:54 kafka | group.coordinator.new.enable = false 23:16:54 policy-db-migrator | -------------- 23:16:54 simulator | 2024-02-27 23:14:21,005 INFO Started o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE} 23:16:54 policy-pap | sasl.login.connect.timeout.ms = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.103468232Z level=info msg="Migration successfully executed" id="create user table" duration=660.036µs 23:16:54 kafka | group.coordinator.threads = 1 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:54 simulator | 2024-02-27 23:14:21,007 INFO Started SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 23:16:54 policy-pap | sasl.login.read.timeout.ms = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.106880167Z level=info msg="Executing migration" id="add unique index user.login" 23:16:54 kafka | group.initial.rebalance.delay.ms = 3000 23:16:54 policy-db-migrator | -------------- 23:16:54 simulator | 2024-02-27 23:14:21,007 INFO Started Server@45e37a7e{STARTING}[11.0.20,sto=0] @1817ms 23:16:54 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.107578464Z level=info msg="Migration successfully executed" id="add unique 
index user.login" duration=697.757µs 23:16:54 kafka | group.max.session.timeout.ms = 1800000 23:16:54 policy-db-migrator | 23:16:54 simulator | 2024-02-27 23:14:21,007 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4869 ms. 23:16:54 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.111077923Z level=info msg="Executing migration" id="add unique index user.email" 23:16:54 kafka | group.max.size = 2147483647 23:16:54 policy-db-migrator | 23:16:54 simulator | 2024-02-27 23:14:21,008 INFO org.onap.policy.models.simulators starting SO simulator 23:16:54 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.112100089Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.023616ms 23:16:54 kafka | group.min.session.timeout.ms = 6000 23:16:54 policy-db-migrator | > upgrade 0630-toscanodetype.sql 23:16:54 simulator | 2024-02-27 23:14:21,011 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:54 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.117729964Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 23:16:54 policy-db-migrator | -------------- 23:16:54 simulator | 2024-02-27 23:14:21,012 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:54 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.118750819Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.020255ms 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 23:16:54 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.122224677Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 23:16:54 simulator | 2024-02-27 23:14:21,013 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:54 kafka | initial.broker.registration.timeout.ms = 60000 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.12283586Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=610.932µs 23:16:54 policy-pap | sasl.mechanism = GSSAPI 23:16:54 simulator | 2024-02-27 23:14:21,014 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:54 kafka | inter.broker.listener.name = PLAINTEXT 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.12727382Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 23:16:54 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:54 simulator | 2024-02-27 23:14:21,032 INFO Session workerName=node0 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.13115551Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.87914ms 23:16:54 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:54 simulator | 2024-02-27 23:14:21,093 INFO Using GSON for REST calls 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.136702Z level=info msg="Executing migration" id="create user table v2" 23:16:54 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:54 kafka | inter.broker.protocol.version = 3.6-IV2 23:16:54 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 23:16:54 simulator | 2024-02-27 23:14:21,111 INFO Started o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.137567397Z 
level=info msg="Migration successfully executed" id="create user table v2" duration=866.387µs 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:54 kafka | kafka.metrics.polling.interval.secs = 10 23:16:54 policy-db-migrator | -------------- 23:16:54 simulator | 2024-02-27 23:14:21,123 INFO Started SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.140459154Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 23:16:54 simulator | 2024-02-27 23:14:21,124 INFO Started Server@7516e4e5{STARTING}[11.0.20,sto=0] @1934ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.141198224Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=735.299µs 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:54 simulator | 2024-02-27 23:14:21,124 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4888 ms. 
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.144878743Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.145602102Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=723.11µs 23:16:54 policy-db-migrator | 23:16:54 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:54 policy-db-migrator | 23:16:54 simulator | 2024-02-27 23:14:21,125 INFO org.onap.policy.models.simulators starting VFC simulator 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.150982413Z level=info msg="Executing migration" id="copy data_source v1 to v2" 23:16:54 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:54 simulator | 2024-02-27 23:14:21,130 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.151413327Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=430.604µs 23:16:54 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 23:16:54 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:54 policy-db-migrator | -------------- 23:16:54 simulator | 2024-02-27 23:14:21,131 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.154709135Z level=info msg="Executing migration" id="Drop old table user_v1" 23:16:54 simulator | 2024-02-27 23:14:21,132 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC 
simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:54 policy-pap | security.protocol = PLAINTEXT 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.155472156Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=762.471µs 23:16:54 simulator | 2024-02-27 23:14:21,133 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.159388448Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 23:16:54 kafka | kafka.metrics.reporters = [] 23:16:54 policy-pap | security.providers = null 23:16:54 simulator | 2024-02-27 23:14:21,137 INFO Session workerName=node0 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.161101091Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.711763ms 23:16:54 kafka | leader.imbalance.check.interval.seconds = 300 23:16:54 policy-pap | send.buffer.bytes = 131072 23:16:54 simulator | 2024-02-27 23:14:21,186 INFO Using GSON for REST calls 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.164569898Z level=info msg="Executing migration" id="Update user table charset" 23:16:54 kafka | leader.imbalance.per.broker.percentage = 10 23:16:54 policy-pap | session.timeout.ms = 45000 23:16:54 simulator | 2024-02-27 23:14:21,198 INFO Started o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE} 23:16:54 policy-db-migrator | > upgrade 0660-toscaparameter.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.16461035Z level=info msg="Migration successfully executed" id="Update user table charset" duration=41.162µs 23:16:54 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 23:16:54 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:54 simulator | 2024-02-27 23:14:21,204 INFO Started VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.169795231Z level=info msg="Executing migration" id="Add last_seen_at column to user" 23:16:54 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 23:16:54 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:54 simulator | 2024-02-27 23:14:21,205 INFO Started Server@6f0b0a5e{STARTING}[11.0.20,sto=0] @2015ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.171305953Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.508492ms 23:16:54 kafka | log.cleaner.backoff.ms = 15000 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:54 policy-pap | ssl.cipher.suites = null 23:16:54 simulator | 2024-02-27 23:14:21,205 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4927 ms. 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.174936699Z level=info msg="Executing migration" id="Add missing user data" 23:16:54 kafka | log.cleaner.dedupe.buffer.size = 134217728 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:54 simulator | 2024-02-27 23:14:21,206 INFO org.onap.policy.models.simulators started 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.175263207Z level=info msg="Migration successfully executed" id="Add missing user data" duration=322.008µs 23:16:54 kafka | log.cleaner.delete.retention.ms = 86400000 23:16:54 policy-db-migrator | 23:16:54 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.17864009Z level=info msg="Executing migration" id="Add is_disabled column to user" 23:16:54 policy-db-migrator | 23:16:54 policy-pap | ssl.engine.factory.class = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.17993354Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.294271ms 23:16:54 policy-db-migrator | > upgrade 0670-toscapolicies.sql 23:16:54 policy-pap | ssl.key.password = null 23:16:54 kafka | log.cleaner.enable = true 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.183097621Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:54 kafka | log.cleaner.io.buffer.load.factor = 0.9 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.183804989Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=706.538µs 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 23:16:54 policy-pap | ssl.keystore.certificate.chain = null 23:16:54 kafka | log.cleaner.io.buffer.size = 524288 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.189329818Z level=info msg="Executing migration" id="Add is_service_account column to user" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | ssl.keystore.key = null 23:16:54 kafka | 
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.191130526Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.800328ms 23:16:54 policy-db-migrator | 23:16:54 policy-pap | ssl.keystore.location = null 23:16:54 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.194567642Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 23:16:54 policy-db-migrator | 23:16:54 policy-pap | ssl.keystore.password = null 23:16:54 kafka | log.cleaner.min.cleanable.ratio = 0.5 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.205753937Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=11.186785ms 23:16:54 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 23:16:54 policy-pap | ssl.keystore.type = JKS 23:16:54 kafka | log.cleaner.min.compaction.lag.ms = 0 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.209000173Z level=info msg="Executing migration" id="create temp user table v1-7" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | ssl.protocol = TLSv1.3 23:16:54 kafka | log.cleaner.threads = 1 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.209493009Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=490.666µs 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:54 policy-pap | ssl.provider = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.215111963Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | ssl.secure.random.implementation = null 23:16:54 kafka | log.cleanup.policy = [delete] 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.216157561Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.036287ms 23:16:54 policy-db-migrator | 23:16:54 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:54 kafka | log.dir = /tmp/kafka-logs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.219816298Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:16:54 policy-db-migrator | 23:16:54 policy-pap | ssl.truststore.certificates = null 23:16:54 kafka | log.dirs = /var/lib/kafka/data 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.220928508Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.11903ms 23:16:54 policy-db-migrator | > upgrade 0690-toscapolicy.sql 23:16:54 policy-pap | ssl.truststore.location = null 23:16:54 kafka | log.flush.interval.messages = 9223372036854775807 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.224558475Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | ssl.truststore.password = null 23:16:54 kafka | log.flush.interval.ms = null 
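Editor's note: the ConsumerConfig blocks in the policy-pap log around here (bootstrap.servers = [kafka:9092], auto.offset.reset = latest, security.protocol = PLAINTEXT, subscribed to policy-pdp-pap) describe plain, unauthenticated consumers. A minimal equivalent outside the JVM, assuming the kafka-python client and a hypothetical debug group id, would look like:

    import json
    from kafka import KafkaConsumer  # assumption: kafka-python is installed

    consumer = KafkaConsumer(
        "policy-pdp-pap",                 # topic PAP subscribes to in this log
        bootstrap_servers="kafka:9092",
        group_id="policy-pap-debug",      # hypothetical group id for inspection only
        auto_offset_reset="latest",
        enable_auto_commit=True,
        security_protocol="PLAINTEXT",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    for record in consumer:
        print(record.value.get("messageName"), record.value.get("requestId"))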
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.225250672Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=691.507µs 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 23:16:54 policy-pap | ssl.truststore.type = JKS 23:16:54 kafka | log.flush.offset.checkpoint.interval.ms = 60000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.230951861Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:54 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.23168136Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=729.139µs 23:16:54 policy-db-migrator | 23:16:54 policy-pap | 23:16:54 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.235200381Z level=info msg="Executing migration" id="Update temp_user table charset" 23:16:54 policy-db-migrator | 23:16:54 policy-pap | [2024-02-27T23:14:52.338+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:54 kafka | log.index.interval.bytes = 4096 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.235236473Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=37.452µs 23:16:54 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 23:16:54 policy-pap | [2024-02-27T23:14:52.338+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:54 kafka | log.index.size.max.bytes = 10485760 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.238845928Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | [2024-02-27T23:14:52.338+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709075692337 23:16:54 kafka | log.local.retention.bytes = -2 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.239854342Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.007974ms 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 23:16:54 policy-pap | [2024-02-27T23:14:52.340+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-1, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Subscribed to topic(s): policy-pdp-pap 23:16:54 kafka | log.local.retention.ms = -2 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.24609244Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | [2024-02-27T23:14:52.341+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:54 kafka | log.message.downconversion.enable = true 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.246759607Z 
level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=661.816µs 23:16:54 policy-db-migrator | 23:16:54 policy-pap | allow.auto.create.topics = true 23:16:54 kafka | log.message.format.version = 3.0-IV1 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.250225444Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:16:54 policy-db-migrator | 23:16:54 policy-pap | auto.commit.interval.ms = 5000 23:16:54 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.25126201Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.034295ms 23:16:54 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 23:16:54 policy-pap | auto.include.jmx.reporter = true 23:16:54 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.254855445Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | auto.offset.reset = latest 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.255914552Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.058887ms 23:16:54 policy-pap | bootstrap.servers = [kafka:9092] 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 23:16:54 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.262973913Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:16:54 policy-pap | check.crcs = true 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.266537077Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.562184ms 23:16:54 policy-db-migrator | 23:16:54 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.269776602Z level=info msg="Executing migration" id="create temp_user v2" 23:16:54 policy-pap | client.id = consumer-policy-pap-2 23:16:54 policy-db-migrator | 23:16:54 kafka | log.message.timestamp.type = CreateTime 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.270545753Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=768.731µs 23:16:54 policy-pap | client.rack = 23:16:54 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 23:16:54 kafka | log.preallocate = false 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.273895825Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:16:54 policy-pap | connections.max.idle.ms = 540000 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | log.retention.bytes = -1 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.274637505Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=741.191µs 23:16:54 policy-pap | default.api.timeout.ms = 60000 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, 
conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:54 kafka | log.retention.check.interval.ms = 300000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.280704254Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 23:16:54 policy-pap | enable.auto.commit = true 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | log.retention.hours = 168 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.281814493Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.10995ms 23:16:54 policy-pap | exclude.internal.topics = true 23:16:54 policy-db-migrator | 23:16:54 kafka | log.retention.minutes = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.285710905Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 23:16:54 policy-pap | fetch.max.bytes = 52428800 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.286427713Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=710.998µs 23:16:54 policy-db-migrator | > upgrade 0730-toscaproperty.sql 23:16:54 policy-pap | fetch.max.wait.ms = 500 23:16:54 kafka | log.retention.ms = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.289735332Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | fetch.min.bytes = 1 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.290470602Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=734.89µs 23:16:54 policy-pap | group.id = policy-pap 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.293747629Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | group.instance.id = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.294420666Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=673.097µs 23:16:54 policy-pap | heartbeat.interval.ms = 3000 23:16:54 policy-db-migrator | 23:16:54 kafka | log.roll.hours = 168 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.30078729Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 23:16:54 policy-pap | interceptor.classes = [] 23:16:54 policy-db-migrator | 23:16:54 kafka | log.roll.jitter.hours = 0 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.301578153Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=788.643µs 23:16:54 policy-pap | internal.leave.group.on.close = true 23:16:54 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 23:16:54 kafka | 
log.roll.jitter.ms = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.305221031Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 23:16:54 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.305801012Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=590.612µs 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 23:16:54 policy-pap | isolation.level = read_uncommitted 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.309407887Z level=info msg="Executing migration" id="create star table" 23:16:54 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.310304805Z level=info msg="Migration successfully executed" id="create star table" duration=896.108µs 23:16:54 policy-db-migrator | 23:16:54 policy-pap | max.partition.fetch.bytes = 1048576 23:16:54 policy-db-migrator | 23:16:54 policy-pap | max.poll.interval.ms = 300000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.316015124Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:16:54 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 23:16:54 policy-pap | max.poll.records = 500 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.316748624Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=733µs 23:16:54 policy-pap | metadata.max.age.ms = 300000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.320171019Z level=info msg="Executing migration" id="create org table v1" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | metric.reporters = [] 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.320862437Z level=info msg="Migration successfully executed" id="create org table v1" duration=688.627µs 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 23:16:54 kafka | log.roll.ms = null 23:16:54 policy-pap | metrics.num.samples = 2 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.326271889Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | log.segment.bytes = 1073741824 23:16:54 policy-pap | metrics.recording.level = INFO 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.327413511Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.140802ms 23:16:54 kafka | log.segment.delete.delay.ms = 60000 23:16:54 policy-pap | metrics.sample.window.ms = 30000 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.330829676Z level=info msg="Executing migration" id="create org_user table v1" 23:16:54 kafka | max.connection.creation.rate = 2147483647 23:16:54 policy-pap | partition.assignment.strategy = [class 
org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:54 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.33183467Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.001004ms 23:16:54 kafka | max.connections = 2147483647 23:16:54 policy-pap | receive.buffer.bytes = 65536 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.337796363Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:16:54 kafka | max.connections.per.ip = 2147483647 23:16:54 policy-pap | reconnect.backoff.max.ms = 1000 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.338960596Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.162923ms 23:16:54 kafka | max.connections.per.ip.overrides = 23:16:54 policy-pap | reconnect.backoff.ms = 50 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.342563921Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:16:54 kafka | max.incremental.fetch.session.cache.slots = 1000 23:16:54 policy-pap | request.timeout.ms = 30000 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.343783217Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.218126ms 23:16:54 kafka | message.max.bytes = 1048588 23:16:54 policy-pap | retry.backoff.ms = 100 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.347450116Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 23:16:54 policy-pap | sasl.client.callback.handler.class = null 23:16:54 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:16:54 policy-pap | sasl.jaas.config = null 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 23:16:54 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.348711664Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.181623ms 23:16:54 policy-db-migrator | 23:16:54 kafka | metadata.log.dir = null 23:16:54 policy-pap | sasl.kerberos.service.name = null 23:16:54 grafana | logger=migrator 
t=2024-02-27T23:14:14.352324229Z level=info msg="Executing migration" id="Update org table charset" 23:16:54 policy-db-migrator | 23:16:54 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 23:16:54 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.35235036Z level=info msg="Migration successfully executed" id="Update org table charset" duration=25.721µs 23:16:54 policy-db-migrator | > upgrade 0780-toscarequirements.sql 23:16:54 kafka | metadata.log.max.snapshot.interval.ms = 3600000 23:16:54 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.358394827Z level=info msg="Executing migration" id="Update org_user table charset" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | metadata.log.segment.bytes = 1073741824 23:16:54 policy-pap | sasl.login.callback.handler.class = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.35841969Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=27.853µs 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 23:16:54 kafka | metadata.log.segment.min.bytes = 8388608 23:16:54 policy-pap | sasl.login.class = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.3623075Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | metadata.log.segment.ms = 604800000 23:16:54 policy-pap | sasl.login.connect.timeout.ms = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.362568974Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=262.225µs 23:16:54 policy-db-migrator | 23:16:54 kafka | metadata.max.idle.interval.ms = 500 23:16:54 policy-pap | sasl.login.read.timeout.ms = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.366294625Z level=info msg="Executing migration" id="create dashboard table" 23:16:54 policy-db-migrator | 23:16:54 kafka | metadata.max.retention.bytes = 104857600 23:16:54 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.36730222Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.007005ms 23:16:54 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 23:16:54 kafka | metadata.max.retention.ms = 604800000 23:16:54 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.370890304Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.3721114Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" 
duration=1.216475ms 23:16:54 kafka | metric.reporters = [] 23:16:54 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.377522493Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:16:54 kafka | metrics.num.samples = 2 23:16:54 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.378396771Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=873.738µs 23:16:54 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.382013286Z level=info msg="Executing migration" id="create dashboard_tag table" 23:16:54 policy-db-migrator | 23:16:54 policy-pap | sasl.mechanism = GSSAPI 23:16:54 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.382695923Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=684.927µs 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.386474998Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 23:16:54 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.387743406Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.268528ms 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.393954052Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:16:54 policy-db-migrator | 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:54 kafka | metrics.recording.level = INFO 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.394677701Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=723.149µs 23:16:54 policy-db-migrator | 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:54 kafka | metrics.sample.window.ms = 30000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.398549891Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:16:54 policy-db-migrator | > upgrade 
0810-toscatopologytemplate.sql 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:54 kafka | min.insync.replicas = 1 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.40795358Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=9.402698ms 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:54 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.439768932Z level=info msg="Executing migration" id="create dashboard v2" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.440963856Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.197904ms 23:16:54 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.44713808Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:16:54 policy-pap | security.protocol = PLAINTEXT 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.448392709Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.253598ms 23:16:54 policy-db-migrator | 23:16:54 policy-pap | security.providers = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.451975002Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 23:16:54 kafka | node.id = 1 23:16:54 policy-pap | send.buffer.bytes = 131072 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.453127824Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.143732ms 23:16:54 policy-db-migrator | > upgrade 0820-toscatrigger.sql 23:16:54 kafka | num.io.threads = 8 23:16:54 policy-pap | session.timeout.ms = 45000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.45674027Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | num.network.threads = 3 23:16:54 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.457225066Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=484.726µs 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER 
(parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:54 kafka | num.partitions = 1 23:16:54 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.460756947Z level=info msg="Executing migration" id="drop table dashboard_v1" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | num.recovery.threads.per.data.dir = 1 23:16:54 policy-pap | ssl.cipher.suites = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.461792023Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.035606ms 23:16:54 policy-db-migrator | 23:16:54 kafka | num.replica.alter.log.dirs.threads = null 23:16:54 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.467031897Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 23:16:54 policy-db-migrator | 23:16:54 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.467105951Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=74.564µs 23:16:54 policy-pap | ssl.engine.factory.class = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.470518416Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 23:16:54 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:54 policy-pap | ssl.key.password = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.473403712Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.884626ms 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | num.replica.fetchers = 1 23:16:54 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.480251862Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 23:16:54 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 23:16:54 kafka | offset.metadata.max.bytes = 4096 23:16:54 policy-pap | ssl.keystore.certificate.chain = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.481950615Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.699913ms 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | offsets.commit.required.acks = -1 23:16:54 policy-pap | ssl.keystore.key = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.485848505Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 23:16:54 policy-db-migrator | 23:16:54 policy-pap | ssl.keystore.location = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.489166085Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=3.322869ms 23:16:54 policy-db-migrator | 23:16:54 policy-pap | ssl.keystore.password = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.492912248Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:16:54 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 23:16:54 kafka | offsets.commit.timeout.ms = 5000 23:16:54 policy-pap | ssl.keystore.type = JKS 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.493954704Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" 
duration=1.048416ms 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | offsets.load.buffer.size = 5242880 23:16:54 policy-pap | ssl.protocol = TLSv1.3 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.499783869Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 23:16:54 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 23:16:54 kafka | offsets.retention.check.interval.ms = 600000 23:16:54 policy-pap | ssl.provider = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.501699483Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.912724ms 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | offsets.retention.minutes = 10080 23:16:54 policy-pap | ssl.secure.random.implementation = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.505057085Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 23:16:54 kafka | offsets.topic.compression.codec = 0 23:16:54 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:54 policy-db-migrator | 23:16:54 policy-pap | ssl.truststore.certificates = null 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.505979184Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=922.169µs 23:16:54 kafka | offsets.topic.num.partitions = 50 23:16:54 policy-pap | ssl.truststore.location = null 23:16:54 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.509256632Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 23:16:54 kafka | offsets.topic.replication.factor = 1 23:16:54 policy-pap | ssl.truststore.password = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.510090888Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=833.905µs 23:16:54 kafka | offsets.topic.segment.bytes = 104857600 23:16:54 policy-pap | ssl.truststore.type = JKS 23:16:54 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.515582645Z level=info msg="Executing migration" id="Update dashboard table charset" 23:16:54 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 23:16:54 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.515622617Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=33.441µs 23:16:54 kafka | password.encoder.iterations = 4096 23:16:54 policy-pap | 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.518885483Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 23:16:54 kafka | password.encoder.key.length = 128 23:16:54 policy-pap | [2024-02-27T23:14:52.347+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.518923515Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=39.182µs 23:16:54 kafka | password.encoder.keyfactory.algorithm = null 
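The ConsumerConfig dumps above show policy-pap wiring plain String-deserializing Kafka consumers against bootstrap.servers [kafka:9092], group.id policy-pap, auto.offset.reset latest, and subscribing them to the policy-pdp-pap topic. The sketch below reproduces just those logged settings with the standard Kafka Java client; it mirrors the dumped configuration only and is not the PAP source code itself.

// Sketch: a consumer configured like "consumer-policy-pap-2" in the log above,
// subscribed to policy-pdp-pap. Only values visible in the log are used.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value()); // PDP status / heartbeat payloads
            }
        }
    }
}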
23:16:54 policy-pap | [2024-02-27T23:14:52.347+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:54 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.522477167Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 23:16:54 policy-pap | [2024-02-27T23:14:52.347+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709075692347 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.525460009Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.982242ms 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | [2024-02-27T23:14:52.347+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:54 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.530686182Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.532783485Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.103914ms 23:16:54 policy-pap | [2024-02-27T23:14:52.662+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.538322325Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 23:16:54 policy-pap | [2024-02-27T23:14:52.799+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.541321398Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=3.001632ms 23:16:54 policy-pap | [2024-02-27T23:14:53.025+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@280c3dc0, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6f89ad03, org.springframework.security.web.context.SecurityContextHolderFilter@7bd7d71c, org.springframework.security.web.header.HeaderWriterFilter@7c6ab057, org.springframework.security.web.authentication.logout.LogoutFilter@6b6c0b7c, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3340ff7c, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@ce0bbd5, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@7c359808, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@f287a4e, org.springframework.security.web.access.ExceptionTranslationFilter@7c8f803d, org.springframework.security.web.access.intercept.AuthorizationFilter@55cb3b7] 23:16:54 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.545214508Z level=info msg="Executing migration" id="Add column uid in dashboard" 23:16:54 policy-pap | [2024-02-27T23:14:53.829+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:54 kafka | password.encoder.old.secret = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.54728559Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.058021ms 23:16:54 policy-pap | [2024-02-27T23:14:53.942+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:54 kafka | password.encoder.secret = null 23:16:54 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.551801305Z level=info msg="Executing migration" id="Update uid column values in dashboard" 23:16:54 policy-pap | [2024-02-27T23:14:53.969+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 23:16:54 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.552018067Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=217.261µs 23:16:54 policy-pap | [2024-02-27T23:14:53.989+00:00|INFO|ServiceManager|main] Policy PAP starting 23:16:54 kafka | process.roles = [] 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.556939072Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 23:16:54 policy-pap | [2024-02-27T23:14:53.989+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 23:16:54 kafka | producer.id.expiration.check.interval.ms = 600000 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.557821321Z level=info msg="Migration successfully executed" id="Add unique index 
dashboard_org_id_uid" duration=884.588µs 23:16:54 policy-pap | [2024-02-27T23:14:53.990+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 23:16:54 kafka | producer.id.expiration.ms = 86400000 23:16:54 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.562242469Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 23:16:54 policy-pap | [2024-02-27T23:14:53.990+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 23:16:54 kafka | producer.purgatory.purge.interval.requests = 1000 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.563852587Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.615628ms 23:16:54 policy-pap | [2024-02-27T23:14:53.990+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 23:16:54 kafka | queued.max.request.bytes = -1 23:16:54 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.567831812Z level=info msg="Executing migration" id="Update dashboard title length" 23:16:54 policy-pap | [2024-02-27T23:14:53.991+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 23:16:54 kafka | queued.max.requests = 500 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.567872584Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=50.553µs 23:16:54 policy-pap | [2024-02-27T23:14:53.991+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.573122988Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 23:16:54 policy-pap | [2024-02-27T23:14:53.995+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=fd3c6b2f-e961-4dee-b92a-5df6cab870fa, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@10e4ce98 23:16:54 kafka | quota.window.num = 11 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.574558546Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.435148ms 23:16:54 policy-pap | [2024-02-27T23:14:54.006+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=fd3c6b2f-e961-4dee-b92a-5df6cab870fa, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, 
toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:54 kafka | quota.window.size.seconds = 1 23:16:54 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.578578513Z level=info msg="Executing migration" id="create dashboard_provisioning" 23:16:54 policy-pap | [2024-02-27T23:14:54.006+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:54 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.579547326Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=970.942µs 23:16:54 policy-pap | allow.auto.create.topics = true 23:16:54 kafka | remote.log.manager.task.interval.ms = 30000 23:16:54 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.583299589Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 23:16:54 policy-pap | auto.commit.interval.ms = 5000 23:16:54 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.590481967Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=7.181128ms 23:16:54 policy-pap | auto.include.jmx.reporter = true 23:16:54 kafka | remote.log.manager.task.retry.backoff.ms = 500 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.595773764Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 23:16:54 policy-pap | auto.offset.reset = latest 23:16:54 kafka | remote.log.manager.task.retry.jitter = 0.2 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.59644247Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=661.785µs 23:16:54 policy-pap | bootstrap.servers = [kafka:9092] 23:16:54 kafka | remote.log.manager.thread.pool.size = 10 23:16:54 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.599983582Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 23:16:54 policy-pap | check.crcs = true 23:16:54 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.600745823Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=762.251µs 23:16:54 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:54 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 23:16:54 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.604735519Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 23:16:54 policy-pap 
| client.id = consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3 23:16:54 kafka | remote.log.metadata.manager.class.path = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.605569985Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=833.815µs 23:16:54 policy-pap | client.rack = 23:16:54 policy-pap | connections.max.idle.ms = 540000 23:16:54 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.610533683Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 23:16:54 policy-db-migrator | 23:16:54 policy-pap | default.api.timeout.ms = 60000 23:16:54 policy-db-migrator | 23:16:54 policy-pap | enable.auto.commit = true 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.61084924Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=309.347µs 23:16:54 kafka | remote.log.metadata.manager.listener.name = null 23:16:54 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:54 policy-pap | exclude.internal.topics = true 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.614579692Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | fetch.max.bytes = 52428800 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.61547946Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=893.128µs 23:16:54 kafka | remote.log.reader.max.pending.tasks = 100 23:16:54 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 23:16:54 policy-pap | fetch.max.wait.ms = 500 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.619389652Z level=info msg="Executing migration" id="Add check_sum column" 23:16:54 kafka | remote.log.reader.threads = 10 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | fetch.min.bytes = 1 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.622726773Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.341271ms 23:16:54 kafka | remote.log.storage.manager.class.name = null 23:16:54 policy-db-migrator | 23:16:54 policy-pap | group.id = fd3c6b2f-e961-4dee-b92a-5df6cab870fa 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.626533689Z level=info msg="Executing migration" id="Add index for dashboard_title" 23:16:54 kafka | remote.log.storage.manager.class.path = null 23:16:54 policy-db-migrator | 23:16:54 policy-pap | group.instance.id = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.627329031Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=794.783µs 23:16:54 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
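The grafana migrator lines above follow a rename-copy-drop pattern: rename dashboard_provisioning to dashboard_provisioning_tmp_qwerty, create the v2 table, copy v1 rows into v2, then drop the temporary table, before adding the check_sum column and dashboard_title index. The sketch below outlines that sequence generically; the column list is a simplified placeholder and not Grafana's actual schema or Go implementation.

// Sketch of the rename-copy-drop migration pattern seen in the grafana log.
// Table statements are illustrative; columns are placeholders.
import java.sql.Connection;
import java.sql.Statement;

public final class RenameCopyDropMigrationSketch {
    static void migrate(Connection conn) throws Exception {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("ALTER TABLE dashboard_provisioning RENAME TO dashboard_provisioning_tmp_qwerty");
            stmt.execute("CREATE TABLE dashboard_provisioning ("
                       + "id BIGINT PRIMARY KEY, dashboard_id BIGINT, name VARCHAR(190))"); // placeholder columns
            stmt.execute("INSERT INTO dashboard_provisioning (id, dashboard_id, name) "
                       + "SELECT id, dashboard_id, name FROM dashboard_provisioning_tmp_qwerty");
            stmt.execute("DROP TABLE dashboard_provisioning_tmp_qwerty");
        }
    }
}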
23:16:54 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 23:16:54 policy-pap | heartbeat.interval.ms = 3000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.631879668Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:16:54 kafka | remote.log.storage.system.enable = false 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | interceptor.classes = [] 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.632055527Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=175.179µs 23:16:54 kafka | replica.fetch.backoff.ms = 1000 23:16:54 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 23:16:54 policy-pap | internal.leave.group.on.close = true 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.636278046Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:16:54 kafka | replica.fetch.max.bytes = 1048576 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:54 policy-db-migrator | 23:16:54 policy-pap | isolation.level = read_uncommitted 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.636445275Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=165.819µs 23:16:54 kafka | replica.fetch.min.bytes = 1 23:16:54 policy-db-migrator | 23:16:54 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.640017618Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:16:54 kafka | replica.fetch.response.max.bytes = 10485760 23:16:54 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 23:16:54 policy-pap | max.partition.fetch.bytes = 1048576 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | max.poll.interval.ms = 300000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.6407924Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=774.652µs 23:16:54 kafka | replica.fetch.wait.max.ms = 500 23:16:54 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 23:16:54 policy-pap | max.poll.records = 500 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.645678915Z level=info msg="Executing migration" id="Add isPublic for dashboard" 23:16:54 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | metadata.max.age.ms = 300000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.647819401Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.134965ms 23:16:54 kafka | replica.lag.time.max.ms = 30000 23:16:54 policy-db-migrator | 23:16:54 policy-pap | metric.reporters = [] 23:16:54 policy-pap | metrics.num.samples = 2 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.651614626Z level=info msg="Executing migration" id="create data_source table" 23:16:54 kafka | replica.selector.class = null 23:16:54 policy-pap | metrics.recording.level = INFO 23:16:54 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:54 policy-pap | metrics.sample.window.ms = 30000 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator 
t=2024-02-27T23:14:14.653170941Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.555994ms 23:16:54 kafka | replica.socket.receive.buffer.bytes = 65536 23:16:54 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:54 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.656971006Z level=info msg="Executing migration" id="add index data_source.account_id" 23:16:54 kafka | replica.socket.timeout.ms = 30000 23:16:54 policy-pap | receive.buffer.bytes = 65536 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.657713906Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=742.28µs 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.66240965Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:16:54 policy-pap | reconnect.backoff.max.ms = 1000 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.663235215Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=824.675µs 23:16:54 kafka | replication.quota.window.num = 11 23:16:54 policy-pap | reconnect.backoff.ms = 50 23:16:54 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.667524827Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:16:54 kafka | replication.quota.window.size.seconds = 1 23:16:54 policy-pap | request.timeout.ms = 30000 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.66869864Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.173693ms 23:16:54 kafka | request.timeout.ms = 30000 23:16:54 policy-pap | retry.backoff.ms = 100 23:16:54 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.672573361Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 23:16:54 policy-pap | sasl.client.callback.handler.class = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.673293269Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=720.078µs 23:16:54 kafka | reserved.broker.max.id = 1000 23:16:54 policy-pap | sasl.jaas.config = null 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.677815374Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 23:16:54 kafka | sasl.client.callback.handler.class = null 23:16:54 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.68569517Z level=info msg="Migration successfully 
executed" id="Rename table data_source to data_source_v1 - v1" duration=7.879076ms 23:16:54 kafka | sasl.enabled.mechanisms = [GSSAPI] 23:16:54 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:54 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.689469565Z level=info msg="Executing migration" id="create data_source table v2" 23:16:54 kafka | sasl.jaas.config = null 23:16:54 policy-pap | sasl.kerberos.service.name = null 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:54 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.690262938Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=798.003µs 23:16:54 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:54 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.693776898Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 23:16:54 kafka | sasl.kerberos.min.time.before.relogin = 60000 23:16:54 policy-pap | sasl.login.callback.handler.class = null 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.694637145Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=859.548µs 23:16:54 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 23:16:54 policy-pap | sasl.login.class = null 23:16:54 policy-db-migrator | 23:16:54 policy-pap | sasl.login.connect.timeout.ms = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.699989054Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 23:16:54 policy-pap | sasl.login.read.timeout.ms = null 23:16:54 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.701258393Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.2749ms 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:54 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:54 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.70600735Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.706856146Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=848.766µs 23:16:54 policy-db-migrator | 23:16:54 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.712255388Z level=info msg="Executing migration" id="Add column with_credentials" 23:16:54 policy-db-migrator | 23:16:54 policy-pap | sasl.login.retry.backoff.max.ms = 10000 
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.715892815Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.633476ms 23:16:54 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:54 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.719830868Z level=info msg="Executing migration" id="Add secure json data column" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | sasl.mechanism = GSSAPI 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.722056388Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.22499ms 23:16:54 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.72578492Z level=info msg="Executing migration" id="Update data_source table charset" 23:16:54 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.725811062Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=26.962µs 23:16:54 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | sasl.kerberos.service.name = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.730014149Z level=info msg="Executing migration" id="Update initial version to 1" 23:16:54 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:54 policy-db-migrator | 23:16:54 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.730183418Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=169.339µs 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:54 policy-db-migrator | 23:16:54 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.734687172Z level=info msg="Executing migration" id="Add read_only data column" 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:54 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:54 kafka | sasl.login.callback.handler.class = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.737959069Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.270277ms 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | sasl.login.class = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.741979656Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:54 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.742252682Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=264.725µs 23:16:54 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:54 policy-db-migrator | 
-------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.746040636Z level=info msg="Executing migration" id="Update json_data with nulls" 23:16:54 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.746206795Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=166.329µs 23:16:54 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:54 policy-db-migrator | 23:16:54 kafka | sasl.login.connect.timeout.ms = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.749979409Z level=info msg="Executing migration" id="Add uid column" 23:16:54 policy-pap | security.protocol = PLAINTEXT 23:16:54 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:54 kafka | sasl.login.read.timeout.ms = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.753052175Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.071296ms 23:16:54 policy-pap | security.providers = null 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | sasl.login.refresh.buffer.seconds = 300 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.758286649Z level=info msg="Executing migration" id="Update uid value" 23:16:54 policy-pap | send.buffer.bytes = 131072 23:16:54 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:54 kafka | sasl.login.refresh.min.period.seconds = 60 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.7586761Z level=info msg="Migration successfully executed" id="Update uid value" duration=388.901µs 23:16:54 policy-pap | session.timeout.ms = 45000 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | sasl.login.refresh.window.factor = 0.8 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.762694678Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 23:16:54 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:54 policy-db-migrator | 23:16:54 kafka | sasl.login.refresh.window.jitter = 0.05 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.763519782Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=819.244µs 23:16:54 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:54 policy-db-migrator | 23:16:54 policy-pap | ssl.cipher.suites = null 23:16:54 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.767225482Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 23:16:54 kafka | sasl.login.retry.backoff.max.ms = 10000 23:16:54 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.768054688Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=829.296µs 23:16:54 kafka | sasl.login.retry.backoff.ms = 100 23:16:54 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:54 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE 
RESTRICT 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.773201136Z level=info msg="Executing migration" id="create api_key table" 23:16:54 kafka | sasl.mechanism.controller.protocol = GSSAPI 23:16:54 policy-pap | ssl.engine.factory.class = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.774301176Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.098549ms 23:16:54 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 23:16:54 policy-pap | ssl.key.password = null 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.778235158Z level=info msg="Executing migration" id="add index api_key.account_id" 23:16:54 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.779089635Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=853.677µs 23:16:54 kafka | sasl.oauthbearer.clock.skew.seconds = 30 23:16:54 policy-pap | ssl.keystore.certificate.chain = null 23:16:54 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.783181746Z level=info msg="Executing migration" id="add index api_key.key" 23:16:54 kafka | sasl.oauthbearer.expected.audience = null 23:16:54 policy-pap | ssl.keystore.key = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.78399838Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=816.234µs 23:16:54 kafka | sasl.oauthbearer.expected.issuer = null 23:16:54 policy-pap | ssl.keystore.location = null 23:16:54 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.788632131Z level=info msg="Executing migration" id="add index api_key.account_id_name" 23:16:54 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.789462396Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=829.675µs 23:16:54 policy-pap | ssl.keystore.password = null 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:54 policy-pap | ssl.keystore.type = JKS 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.795211927Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 23:16:54 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:54 policy-pap | ssl.protocol = TLSv1.3 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.796447544Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.235197ms 23:16:54 kafka | sasl.oauthbearer.jwks.endpoint.url = null 23:16:54 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.800793229Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 23:16:54 policy-pap | ssl.provider = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | 
logger=migrator t=2024-02-27T23:14:14.801932721Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.139292ms 23:16:54 kafka | sasl.oauthbearer.scope.claim.name = scope 23:16:54 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:54 policy-pap | ssl.secure.random.implementation = null 23:16:54 kafka | sasl.oauthbearer.sub.claim.name = sub 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.806778444Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 23:16:54 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:54 kafka | sasl.oauthbearer.token.endpoint.url = null 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.807496642Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=720.639µs 23:16:54 kafka | sasl.server.callback.handler.class = null 23:16:54 policy-db-migrator | 23:16:54 policy-pap | ssl.truststore.certificates = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.811859628Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 23:16:54 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 23:16:54 kafka | sasl.server.max.receive.size = 524288 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.822116263Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=10.257975ms 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | ssl.truststore.location = null 23:16:54 kafka | security.inter.broker.protocol = PLAINTEXT 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.82705104Z level=info msg="Executing migration" id="create api_key table v2" 23:16:54 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:54 policy-pap | ssl.truststore.password = null 23:16:54 kafka | security.providers = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.827630322Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=578.831µs 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | ssl.truststore.type = JKS 23:16:54 kafka | server.max.startup.time.ms = 9223372036854775807 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.831435277Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 23:16:54 policy-db-migrator | 23:16:54 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:54 kafka | socket.connection.setup.timeout.max.ms = 30000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.832202659Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=767.532µs 23:16:54 policy-db-migrator | 23:16:54 policy-pap | 23:16:54 kafka | socket.connection.setup.timeout.ms = 10000 23:16:54 kafka | socket.listen.backlog.size = 50 23:16:54 policy-pap | [2024-02-27T23:14:54.013+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.836039447Z level=info 
msg="Executing migration" id="create index UQE_api_key_key - v2" 23:16:54 kafka | socket.receive.buffer.bytes = 102400 23:16:54 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:54 policy-pap | [2024-02-27T23:14:54.013+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.837150796Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.108629ms 23:16:54 kafka | socket.request.max.bytes = 104857600 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | [2024-02-27T23:14:54.013+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709075694013 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.842490846Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 23:16:54 kafka | socket.send.buffer.bytes = 102400 23:16:54 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:54 policy-pap | [2024-02-27T23:14:54.013+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Subscribed to topic(s): policy-pdp-pap 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.843792146Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.300891ms 23:16:54 kafka | ssl.cipher.suites = [] 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | [2024-02-27T23:14:54.013+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.847849196Z level=info msg="Executing migration" id="copy api_key v1 to v2" 23:16:54 kafka | ssl.client.auth = none 23:16:54 policy-db-migrator | 23:16:54 policy-pap | [2024-02-27T23:14:54.014+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=07f7b656-9624-4941-a88d-48e2bcdcb4e7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@38d308e7 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.848194944Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=343.709µs 23:16:54 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:54 policy-db-migrator | 23:16:54 policy-pap | [2024-02-27T23:14:54.014+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=07f7b656-9624-4941-a88d-48e2bcdcb4e7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, 
uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.851601509Z level=info msg="Executing migration" id="Drop old table api_key_v1" 23:16:54 kafka | ssl.endpoint.identification.algorithm = https 23:16:54 policy-db-migrator | > upgrade 0100-pdp.sql 23:16:54 policy-pap | [2024-02-27T23:14:54.014+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.852134798Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=533.359µs 23:16:54 kafka | ssl.engine.factory.class = null 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | allow.auto.create.topics = true 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.857049423Z level=info msg="Executing migration" id="Update api_key table charset" 23:16:54 kafka | ssl.key.password = null 23:16:54 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 23:16:54 policy-pap | auto.commit.interval.ms = 5000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.857096017Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=42.393µs 23:16:54 kafka | ssl.keymanager.algorithm = SunX509 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | auto.include.jmx.reporter = true 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.861471903Z level=info msg="Executing migration" id="Add expires to api_key table" 23:16:54 kafka | ssl.keystore.certificate.chain = null 23:16:54 policy-db-migrator | 23:16:54 policy-pap | auto.offset.reset = latest 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.865716103Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.24849ms 23:16:54 kafka | ssl.keystore.key = null 23:16:54 policy-db-migrator | 23:16:54 policy-pap | bootstrap.servers = [kafka:9092] 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.869878358Z level=info msg="Executing migration" id="Add service account foreign key" 23:16:54 kafka | ssl.keystore.location = null 23:16:54 kafka | ssl.keystore.password = null 23:16:54 policy-pap | check.crcs = true 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.872638007Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.760349ms 23:16:54 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:54 kafka | ssl.keystore.type = JKS 23:16:54 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.877389675Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | ssl.principal.mapping.rules = DEFAULT 23:16:54 policy-pap | client.id = consumer-policy-pap-4 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.877549363Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=159.828µs 23:16:54 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:54 kafka | ssl.protocol = TLSv1.3 23:16:54 policy-pap | client.rack = 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.881590102Z 
level=info msg="Executing migration" id="Add last_used_at to api_key table" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | ssl.provider = null 23:16:54 policy-pap | connections.max.idle.ms = 540000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.884070876Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.479955ms 23:16:54 policy-db-migrator | 23:16:54 kafka | ssl.secure.random.implementation = null 23:16:54 policy-pap | default.api.timeout.ms = 60000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.887795597Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 23:16:54 policy-db-migrator | 23:16:54 kafka | ssl.trustmanager.algorithm = PKIX 23:16:54 policy-pap | enable.auto.commit = true 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.890266261Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.470264ms 23:16:54 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 23:16:54 kafka | ssl.truststore.certificates = null 23:16:54 policy-pap | exclude.internal.topics = true 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.89429464Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | ssl.truststore.location = null 23:16:54 policy-pap | fetch.max.bytes = 52428800 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.895014068Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=719.169µs 23:16:54 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:54 kafka | ssl.truststore.password = null 23:16:54 policy-pap | fetch.max.wait.ms = 500 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.899729714Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | ssl.truststore.type = JKS 23:16:54 policy-pap | fetch.min.bytes = 1 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.900268593Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=538.589µs 23:16:54 policy-db-migrator | 23:16:54 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 23:16:54 policy-pap | group.id = policy-pap 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.904120161Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 23:16:54 policy-db-migrator | 23:16:54 kafka | transaction.max.timeout.ms = 900000 23:16:54 policy-pap | group.instance.id = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.905292885Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.172944ms 23:16:54 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 23:16:54 kafka | transaction.partition.verification.enable = true 23:16:54 policy-pap | heartbeat.interval.ms = 3000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.911025155Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 23:16:54 policy-pap | interceptor.classes = [] 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.912281553Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" 
duration=1.253457ms 23:16:54 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 23:16:54 kafka | transaction.state.log.load.buffer.size = 5242880 23:16:54 policy-pap | internal.leave.group.on.close = true 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.916569935Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | transaction.state.log.min.isr = 2 23:16:54 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.91778676Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.215825ms 23:16:54 policy-db-migrator | 23:16:54 kafka | transaction.state.log.num.partitions = 50 23:16:54 policy-pap | isolation.level = read_uncommitted 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.921944356Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 23:16:54 policy-db-migrator | 23:16:54 kafka | transaction.state.log.replication.factor = 3 23:16:54 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.922795342Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=850.476µs 23:16:54 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 23:16:54 kafka | transaction.state.log.segment.bytes = 104857600 23:16:54 policy-pap | max.partition.fetch.bytes = 1048576 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.926893883Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | max.poll.interval.ms = 300000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.926958457Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=65.993µs 23:16:54 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 23:16:54 kafka | transactional.id.expiration.ms = 604800000 23:16:54 policy-pap | max.poll.records = 500 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.933193615Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | unclean.leader.election.enable = false 23:16:54 policy-pap | metadata.max.age.ms = 300000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.933234717Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=42.572µs 23:16:54 policy-db-migrator | 23:16:54 kafka | unstable.api.versions.enable = false 23:16:54 policy-pap | metric.reporters = [] 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.937789993Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | zookeeper.clientCnxnSocket = null 
23:16:54 policy-pap | metrics.num.samples = 2 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.942133198Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.343345ms 23:16:54 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 23:16:54 kafka | zookeeper.connect = zookeeper:2181 23:16:54 policy-pap | metrics.recording.level = INFO 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.9460536Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | zookeeper.connection.timeout.ms = null 23:16:54 policy-pap | metrics.sample.window.ms = 30000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.948772097Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.717797ms 23:16:54 policy-db-migrator | 23:16:54 kafka | zookeeper.max.in.flight.requests = 10 23:16:54 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.953456261Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 23:16:54 policy-db-migrator | 23:16:54 kafka | zookeeper.metadata.migration.enable = false 23:16:54 policy-pap | receive.buffer.bytes = 65536 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.953524625Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=69.054µs 23:16:54 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 23:16:54 kafka | zookeeper.session.timeout.ms = 18000 23:16:54 policy-pap | reconnect.backoff.max.ms = 1000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.959198422Z level=info msg="Executing migration" id="create quota table v1" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | zookeeper.set.acl = false 23:16:54 policy-pap | reconnect.backoff.ms = 50 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.959927391Z level=info msg="Migration successfully executed" id="create quota table v1" duration=729.919µs 23:16:54 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 23:16:54 kafka | zookeeper.ssl.cipher.suites = null 23:16:54 policy-pap | request.timeout.ms = 30000 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.963690295Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 23:16:54 kafka | zookeeper.ssl.client.enable = false 23:16:54 policy-pap | retry.backoff.ms = 100 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.964536331Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=845.106µs 23:16:54 kafka | zookeeper.ssl.crl.enable = false 23:16:54 policy-pap | sasl.client.callback.handler.class = null 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.96822838Z level=info msg="Executing migration" id="Update quota table charset" 23:16:54 kafka | zookeeper.ssl.enabled.protocols = null 23:16:54 policy-pap | sasl.jaas.config = null 23:16:54 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.968255922Z 
level=info msg="Migration successfully executed" id="Update quota table charset" duration=28.312µs 23:16:54 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 23:16:54 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.97302016Z level=info msg="Executing migration" id="create plugin_setting table" 23:16:54 kafka | zookeeper.ssl.keystore.location = null 23:16:54 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:54 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.974128979Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.108179ms 23:16:54 kafka | zookeeper.ssl.keystore.password = null 23:16:54 policy-pap | sasl.kerberos.service.name = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.977862872Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 23:16:54 kafka | zookeeper.ssl.keystore.type = null 23:16:54 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:54 kafka | zookeeper.ssl.ocsp.enable = false 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.979240126Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.376414ms 23:16:54 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:54 kafka | zookeeper.ssl.protocol = TLSv1.2 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.98392923Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 23:16:54 policy-pap | sasl.login.callback.handler.class = null 23:16:54 kafka | zookeeper.ssl.truststore.location = null 23:16:54 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.988029522Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.100562ms 23:16:54 policy-pap | sasl.login.class = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.992748267Z level=info msg="Executing migration" id="Update plugin_setting table charset" 23:16:54 kafka | zookeeper.ssl.truststore.password = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.992771899Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=24.322µs 23:16:54 kafka | zookeeper.ssl.truststore.type = null 23:16:54 policy-pap | sasl.login.connect.timeout.ms = null 23:16:54 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.996769435Z level=info msg="Executing migration" id="create session table" 23:16:54 kafka | (kafka.server.KafkaConfig) 23:16:54 policy-pap | sasl.login.read.timeout.ms = null 23:16:54 policy-db-migrator | JOIN pdpstatistics b 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:14.998029643Z level=info msg="Migration successfully executed" id="create session table" duration=1.260039ms 23:16:54 kafka | [2024-02-27 23:14:22,088] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:54 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:54 
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.001755455Z level=info msg="Executing migration" id="Drop old table playlist table" 23:16:54 kafka | [2024-02-27 23:14:22,088] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:54 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:54 policy-db-migrator | SET a.id = b.id 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.001891102Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=136.137µs 23:16:54 kafka | [2024-02-27 23:14:22,089] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:54 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.009976929Z level=info msg="Executing migration" id="Drop old table playlist_item table" 23:16:54 kafka | [2024-02-27 23:14:22,094] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:54 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.010066453Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=91.964µs 23:16:54 kafka | [2024-02-27 23:14:22,127] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 23:16:54 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.013779168Z level=info msg="Executing migration" id="create playlist table v2" 23:16:54 kafka | [2024-02-27 23:14:22,134] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 23:16:54 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:54 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.014470886Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=694.847µs 23:16:54 kafka | [2024-02-27 23:14:22,143] INFO Loaded 0 logs in 15ms (kafka.log.LogManager) 23:16:54 policy-pap | sasl.mechanism = GSSAPI 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.018068124Z level=info msg="Executing migration" id="create playlist item table v2" 23:16:54 kafka | [2024-02-27 23:14:22,144] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 23:16:54 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:54 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.018797122Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=728.988µs 23:16:54 kafka | [2024-02-27 23:14:22,145] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) 23:16:54 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.022618534Z level=info msg="Executing migration" id="Update playlist table charset" 23:16:54 kafka | [2024-02-27 23:14:22,157] INFO Starting the log cleaner (kafka.log.LogCleaner) 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.022658636Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=41.182µs 23:16:54 kafka | [2024-02-27 23:14:22,210] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 23:16:54 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.027381834Z level=info msg="Executing migration" id="Update playlist_item table charset" 23:16:54 kafka | [2024-02-27 23:14:22,255] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:54 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.027426076Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=45.543µs 23:16:54 kafka | [2024-02-27 23:14:22,269] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.031352693Z level=info msg="Executing migration" id="Add playlist column created_at" 23:16:54 kafka | [2024-02-27 23:14:22,296] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.036030708Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.679975ms 23:16:54 kafka | [2024-02-27 23:14:22,601] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.04005689Z level=info msg="Executing migration" id="Add playlist column updated_at" 23:16:54 kafka | [2024-02-27 23:14:22,619] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 23:16:54 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.043026086Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.968926ms 23:16:54 kafka | [2024-02-27 23:14:22,620] INFO Updated connection-accept-rate max connection 
creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:54 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.048252331Z level=info msg="Executing migration" id="drop preferences table v2" 23:16:54 kafka | [2024-02-27 23:14:22,625] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 23:16:54 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:54 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.048330425Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=78.724µs 23:16:54 kafka | [2024-02-27 23:14:22,629] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:54 policy-pap | security.protocol = PLAINTEXT 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.051833769Z level=info msg="Executing migration" id="drop preferences table v3" 23:16:54 kafka | [2024-02-27 23:14:22,652] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:54 policy-pap | security.providers = null 23:16:54 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.051910653Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=77.654µs 23:16:54 kafka | [2024-02-27 23:14:22,653] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:54 policy-pap | send.buffer.bytes = 131072 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.055909493Z level=info msg="Executing migration" id="create preferences table v3" 23:16:54 kafka | [2024-02-27 23:14:22,655] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:54 policy-pap | session.timeout.ms = 45000 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.056586349Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=677.006µs 23:16:54 kafka | [2024-02-27 23:14:22,656] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:54 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.061936011Z level=info msg="Executing migration" id="Update preferences table charset" 23:16:54 kafka | [2024-02-27 23:14:22,659] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:54 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:54 policy-db-migrator | > upgrade 0210-sequence.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.061978783Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=44.072µs 23:16:54 kafka | [2024-02-27 23:14:22,670] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 23:16:54 policy-pap | ssl.cipher.suites = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | 
logger=migrator t=2024-02-27T23:14:15.067687753Z level=info msg="Executing migration" id="Add column team_id in preferences" 23:16:54 kafka | [2024-02-27 23:14:22,671] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 23:16:54 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.072709516Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.028903ms 23:16:54 kafka | [2024-02-27 23:14:22,698] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) 23:16:54 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.076415342Z level=info msg="Executing migration" id="Update team_id column values in preferences" 23:16:54 kafka | [2024-02-27 23:14:22,724] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1709075662712,1709075662712,1,0,0,72057610112401409,258,0,27 23:16:54 policy-pap | ssl.engine.factory.class = null 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.076596531Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=180.119µs 23:16:54 kafka | (kafka.zk.KafkaZkClient) 23:16:54 policy-pap | ssl.key.password = null 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.080126927Z level=info msg="Executing migration" id="Add column week_start in preferences" 23:16:54 kafka | [2024-02-27 23:14:22,724] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 23:16:54 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:54 policy-db-migrator | > upgrade 0220-sequence.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.083369737Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.242711ms 23:16:54 kafka | [2024-02-27 23:14:22,778] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 23:16:54 policy-pap | ssl.keystore.certificate.chain = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.08817912Z level=info msg="Executing migration" id="Add column preferences.json_data" 23:16:54 kafka | [2024-02-27 23:14:22,784] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:54 policy-pap | ssl.keystore.key = null 23:16:54 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.0912307Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.05084ms 23:16:54 kafka | [2024-02-27 23:14:22,791] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:54 policy-pap | ssl.keystore.location = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.094872751Z level=info msg="Executing migration" id="alter 
preferences.json_data to mediumtext v1" 23:16:54 kafka | [2024-02-27 23:14:22,793] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:54 policy-pap | ssl.keystore.password = null 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.094952786Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=79.914µs 23:16:54 kafka | [2024-02-27 23:14:22,798] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 23:16:54 policy-pap | ssl.keystore.type = JKS 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.098423989Z level=info msg="Executing migration" id="Add preferences index org_id" 23:16:54 kafka | [2024-02-27 23:14:22,811] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 23:16:54 policy-pap | ssl.protocol = TLSv1.3 23:16:54 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.09940225Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=977.761µs 23:16:54 kafka | [2024-02-27 23:14:22,818] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 23:16:54 policy-pap | ssl.provider = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.104124888Z level=info msg="Executing migration" id="Add preferences index user_id" 23:16:54 kafka | [2024-02-27 23:14:22,818] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 23:16:54 policy-pap | ssl.secure.random.implementation = null 23:16:54 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.105588795Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.462377ms 23:16:54 kafka | [2024-02-27 23:14:22,822] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 23:16:54 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.110525605Z level=info msg="Executing migration" id="create alert table v1" 23:16:54 kafka | [2024-02-27 23:14:22,827] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:22,844] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:54 policy-pap | ssl.truststore.certificates = null 23:16:54 policy-db-migrator | 23:16:54 kafka | [2024-02-27 23:14:22,846] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.112080487Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.558613ms 23:16:54 policy-pap | ssl.truststore.location = null 23:16:54 policy-db-migrator | 23:16:54 kafka | [2024-02-27 23:14:22,846] INFO [TransactionCoordinator id=1] Startup complete. 
(kafka.coordinator.transaction.TransactionCoordinator) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.150243402Z level=info msg="Executing migration" id="add index alert org_id & id " 23:16:54 policy-pap | ssl.truststore.password = null 23:16:54 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 23:16:54 kafka | [2024-02-27 23:14:22,849] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.151939672Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.69753ms 23:16:54 policy-pap | ssl.truststore.type = JKS 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | [2024-02-27 23:14:22,849] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.159576093Z level=info msg="Executing migration" id="add index alert state" 23:16:54 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:54 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 23:16:54 kafka | [2024-02-27 23:14:22,855] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.160714743Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.145691ms 23:16:54 policy-pap | 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | [2024-02-27 23:14:22,858] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.165123814Z level=info msg="Executing migration" id="add index alert dashboard_id" 23:16:54 policy-pap | [2024-02-27T23:14:54.019+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:54 policy-db-migrator | 23:16:54 kafka | [2024-02-27 23:14:22,860] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.166219762Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.092538ms 23:16:54 policy-pap | [2024-02-27T23:14:54.019+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:54 kafka | [2024-02-27 23:14:22,876] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.171716131Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 23:16:54 policy-pap | [2024-02-27T23:14:54.019+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709075694019 23:16:54 policy-db-migrator | 23:16:54 kafka | [2024-02-27 23:14:22,881] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.172507313Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=790.842µs 23:16:54 policy-db-migrator | > upgrade 0120-toscatrigger.sql 23:16:54 kafka | [2024-02-27 23:14:22,882] INFO [ExpirationReaper-1-AlterAcls]: Starting 
(kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:54 policy-pap | [2024-02-27T23:14:54.019+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.176730435Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | [2024-02-27 23:14:22,887] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 23:16:54 policy-pap | [2024-02-27T23:14:54.019+00:00|INFO|ServiceManager|main] Policy PAP starting topics 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.17759261Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=858.945µs 23:16:54 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 23:16:54 kafka | [2024-02-27 23:14:22,894] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.181544058Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 23:16:54 policy-pap | [2024-02-27T23:14:54.019+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=07f7b656-9624-4941-a88d-48e2bcdcb4e7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | [2024-02-27 23:14:22,896] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.182422524Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=884.527µs 23:16:54 policy-pap | [2024-02-27T23:14:54.019+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=fd3c6b2f-e961-4dee-b92a-5df6cab870fa, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:54 policy-db-migrator | 23:16:54 kafka | [2024-02-27 23:14:22,896] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.186768822Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 23:16:54 policy-pap | [2024-02-27T23:14:54.019+00:00|INFO|InlineBusTopicSink|main] 
InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=bdff849d-228e-461f-880e-f613327978ca, alive=false, publisher=null]]: starting 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.201286975Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=14.516243ms 23:16:54 kafka | [2024-02-27 23:14:22,897] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:54 policy-pap | [2024-02-27T23:14:54.038+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:54 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.205614263Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 23:16:54 kafka | [2024-02-27 23:14:22,897] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 23:16:54 policy-pap | acks = -1 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.206275638Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=661.135µs 23:16:54 kafka | [2024-02-27 23:14:22,900] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 23:16:54 policy-pap | auto.include.jmx.reporter = true 23:16:54 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.210178954Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 23:16:54 kafka | [2024-02-27 23:14:22,900] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 23:16:54 policy-pap | batch.size = 16384 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.211091551Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=912.327µs 23:16:54 kafka | [2024-02-27 23:14:22,901] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 23:16:54 policy-pap | bootstrap.servers = [kafka:9092] 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.216298335Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 23:16:54 kafka | [2024-02-27 23:14:22,901] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 23:16:54 policy-pap | buffer.memory = 33554432 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.218922013Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=2.623428ms 23:16:54 kafka | [2024-02-27 23:14:22,902] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 23:16:54 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:54 policy-db-migrator | > upgrade 0140-toscaparameter.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.226509621Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 23:16:54 kafka | [2024-02-27 23:14:22,907] INFO [/config/changes-event-process-thread]: Starting 
(kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 23:16:54 policy-pap | client.id = producer-1 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.228014061Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.50404ms 23:16:54 kafka | [2024-02-27 23:14:22,906] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 23:16:54 policy-pap | compression.type = none 23:16:54 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.23408422Z level=info msg="Executing migration" id="create alert_notification table v1" 23:16:54 kafka | [2024-02-27 23:14:22,915] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 23:16:54 policy-pap | connections.max.idle.ms = 540000 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.235914906Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.827576ms 23:16:54 kafka | [2024-02-27 23:14:22,916] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:54 policy-pap | delivery.timeout.ms = 120000 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.242421148Z level=info msg="Executing migration" id="Add column is_default" 23:16:54 kafka | [2024-02-27 23:14:22,919] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 23:16:54 policy-pap | enable.idempotence = true 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.248089056Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.672098ms 23:16:54 kafka | [2024-02-27 23:14:22,920] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:54 policy-pap | interceptor.classes = [] 23:16:54 policy-db-migrator | > upgrade 0150-toscaproperty.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.252035603Z level=info msg="Executing migration" id="Add column frequency" 23:16:54 kafka | [2024-02-27 23:14:22,920] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 23:16:54 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.255558139Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.522056ms 23:16:54 kafka | [2024-02-27 23:14:22,920] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 23:16:54 policy-pap | linger.ms = 0 23:16:54 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.259512376Z level=info msg="Executing migration" id="Add column send_reminder" 23:16:54 kafka | [2024-02-27 23:14:22,921] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 23:16:54 policy-pap | max.block.ms = 60000 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | 
logger=migrator t=2024-02-27T23:14:15.263023741Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.510425ms 23:16:54 kafka | [2024-02-27 23:14:22,923] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 23:16:54 policy-pap | max.in.flight.requests.per.connection = 5 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.266758118Z level=info msg="Executing migration" id="Add column disable_resolve_message" 23:16:54 kafka | [2024-02-27 23:14:22,924] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 23:16:54 policy-pap | max.request.size = 1048576 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.270252761Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.494083ms 23:16:54 kafka | [2024-02-27 23:14:22,924] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 23:16:54 policy-pap | metadata.max.age.ms = 300000 23:16:54 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.28069517Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 23:16:54 kafka | [2024-02-27 23:14:22,926] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 23:16:54 policy-pap | metadata.max.idle.ms = 300000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.282573838Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.877958ms 23:16:54 kafka | [2024-02-27 23:14:22,929] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | metric.reporters = [] 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.287770342Z level=info msg="Executing migration" id="Update alert table charset" 23:16:54 kafka | [2024-02-27 23:14:22,930] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.7:9092) could not be established. Broker may not be available. 
(org.apache.kafka.clients.NetworkClient) 23:16:54 policy-db-migrator | 23:16:54 policy-pap | metrics.num.samples = 2 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.287811944Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=42.782µs 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | metrics.recording.level = INFO 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.29171254Z level=info msg="Executing migration" id="Update alert_notification table charset" 23:16:54 kafka | [2024-02-27 23:14:22,930] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 23:16:54 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 23:16:54 policy-pap | metrics.sample.window.ms = 30000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.291741811Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=30.022µs 23:16:54 kafka | [2024-02-27 23:14:22,934] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.297056391Z level=info msg="Executing migration" id="create notification_journal table v1" 23:16:54 kafka | [2024-02-27 23:14:22,935] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 23:16:54 policy-db-migrator | 23:16:54 policy-pap | partitioner.availability.timeout.ms = 0 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.29818992Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.13319ms 23:16:54 kafka | [2024-02-27 23:14:22,935] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 23:16:54 policy-db-migrator | 23:16:54 policy-pap | partitioner.class = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.305112623Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 23:16:54 kafka | [2024-02-27 23:14:22,936] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) 23:16:54 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 23:16:54 policy-pap | partitioner.ignore.keys = false 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.306541279Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.433876ms 23:16:54 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. 
23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | receive.buffer.bytes = 32768 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.311878379Z level=info msg="Executing migration" id="drop alert_notification_journal" 23:16:54 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 23:16:54 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 23:16:54 policy-pap | reconnect.backoff.max.ms = 1000 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.313129755Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.250896ms 23:16:54 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.323480369Z level=info msg="Executing migration" id="create alert_notification_state table v1" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | reconnect.backoff.ms = 50 23:16:54 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.324203097Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=723.018µs 23:16:54 policy-db-migrator | 23:16:54 policy-pap | request.timeout.ms = 30000 23:16:54 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.329849514Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | retries = 2147483647 23:16:54 kafka | [2024-02-27 23:14:22,937] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.330807385Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=957.421µs 23:16:54 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 23:16:54 policy-pap | retry.backoff.ms = 100 23:16:54 kafka | [2024-02-27 23:14:22,941] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.33529364Z level=info msg="Executing migration" id="Add for to alert table" 23:16:54 policy-db-migrator | -------------- 23:16:54 policy-pap | sasl.client.callback.handler.class = null 23:16:54 kafka | [2024-02-27 23:14:22,945] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.338965884Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.668703ms 23:16:54 policy-db-migrator | 23:16:54 policy-pap | sasl.jaas.config = null 23:16:54 kafka | [2024-02-27 23:14:22,945] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.343621038Z level=info msg="Executing migration" id="Add column uid in alert_notification" 23:16:54 policy-db-migrator | 23:16:54 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:54 kafka | [2024-02-27 23:14:22,945] INFO Kafka startTimeMs: 1709075662936 (org.apache.kafka.common.utils.AppInfoParser) 
23:16:54 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.347270049Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.648771ms 23:16:54 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:54 kafka | [2024-02-27 23:14:22,947] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.350998176Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 23:16:54 policy-pap | sasl.kerberos.service.name = null 23:16:54 kafka | [2024-02-27 23:14:22,959] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 23:16:54 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.351183766Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=186.13µs 23:16:54 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:54 kafka | [2024-02-27 23:14:23,045] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.354716951Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 23:16:54 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:54 kafka | [2024-02-27 23:14:23,134] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.355554465Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=837.644µs 23:16:54 policy-pap | sasl.login.callback.handler.class = null 23:16:54 kafka | [2024-02-27 23:14:23,136] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.362196475Z level=info msg="Executing migration" id="Remove unique index org_id_name" 23:16:54 policy-pap | sasl.login.class = null 23:16:54 kafka | [2024-02-27 23:14:23,209] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:54 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.363004017Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=807.562µs 23:16:54 policy-pap | sasl.login.connect.timeout.ms = null 23:16:54 kafka | [2024-02-27 23:14:27,961] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.366757394Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 23:16:54 
policy-pap | sasl.login.read.timeout.ms = null 23:16:54 kafka | [2024-02-27 23:14:27,962] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.372496765Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.738591ms 23:16:54 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:54 policy-db-migrator | 23:16:54 kafka | [2024-02-27 23:14:54,507] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.378293171Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 23:16:54 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:54 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 23:16:54 kafka | [2024-02-27 23:14:54,512] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.378365575Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=64.643µs 23:16:54 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | [2024-02-27 23:14:54,511] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.3830514Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 23:16:54 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:54 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 23:16:54 kafka | [2024-02-27 23:14:54,517] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.383883895Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=832.215µs 23:16:54 
policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | [2024-02-27 23:14:54,548] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(Z3t4fWueQ-mCuVhNX6-71A),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(RvhKTaXTQ2ueiwbitViXeA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.388791182Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 23:16:54 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:54 policy-db-migrator | 23:16:54 kafka | [2024-02-27 23:14:54,550] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.38968122Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=889.818µs 23:16:54 policy-pap | sasl.mechanism = GSSAPI 23:16:54 policy-db-migrator | 23:16:54 kafka | [2024-02-27 23:14:54,552] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.393420756Z level=info msg="Executing migration" id="Drop old annotation table v4" 23:16:54 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:54 policy-db-migrator | > upgrade 0100-upgrade.sql 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.39350853Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=88.344µs 23:16:54 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.398079381Z level=info msg="Executing migration" id="create annotation table v5" 23:16:54 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:54 policy-db-migrator | select 'upgrade to 1100 completed' as msg 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.398863392Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=784.031µs 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.403088524Z level=info msg="Executing migration" id="add index annotation 0 v3" 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:54 policy-db-migrator | 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.403990292Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=901.578µs 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:54 policy-db-migrator | msg 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.407607882Z level=info msg="Executing migration" id="add index annotation 1 v3" 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:54 policy-db-migrator | upgrade to 1100 completed 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.408443505Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=835.934µs 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | 
sasl.oauthbearer.scope.claim.name = scope 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.415993362Z level=info msg="Executing migration" id="add index annotation 2 v3" 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:54 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.417244238Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.251715ms 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.420301849Z level=info msg="Executing migration" id="add index annotation 3 v3" 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | security.protocol = PLAINTEXT 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.421251589Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=949.32µs 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | security.providers = null 23:16:54 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.426937898Z level=info msg="Executing migration" id="add index annotation 4 v3" 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | send.buffer.bytes = 131072 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.42794439Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.000532ms 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.431757961Z level=info msg="Executing migration" id="Update annotation table charset" 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.431797984Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=35.442µs 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.cipher.suites = null 23:16:54 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.434882276Z level=info msg="Executing migration" id="Add column region_id to annotation table" 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.438933038Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.045852ms 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:54 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.444628727Z level=info msg="Executing migration" id="Drop category_id index" 23:16:54 kafka | [2024-02-27 23:14:54,553] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.engine.factory.class = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.445577738Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=950.921µs 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.key.password = null 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.449331925Z level=info msg="Executing migration" id="Add column tags to annotation table" 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.455729271Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.401706ms 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.keystore.certificate.chain = null 23:16:54 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.459206074Z level=info msg="Executing migration" id="Create annotation_tag table v2" 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.keystore.key = null 23:16:54 policy-db-migrator 
| -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.459953164Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=750.56µs 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.keystore.location = null 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.465986271Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.keystore.password = null 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.466975252Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=988.812µs 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.keystore.type = JKS 23:16:54 policy-db-migrator | > upgrade 0120-audit_sequence.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.470463695Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.protocol = TLSv1.3 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.471132771Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=669.126µs 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.provider = null 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.474792433Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.secure.random.implementation = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.490012323Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=15.21443ms 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator 
t=2024-02-27T23:14:15.498379873Z level=info msg="Executing migration" id="Create annotation_tag table v3" 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.truststore.certificates = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.499743985Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.369573ms 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.truststore.location = null 23:16:54 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.504469224Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.truststore.password = null 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.50534864Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=875.195µs 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | ssl.truststore.type = JKS 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.509832826Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 23:16:54 kafka | [2024-02-27 23:14:54,554] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | transaction.timeout.ms = 60000 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.51011182Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=279.114µs 23:16:54 kafka | [2024-02-27 23:14:54,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-pap | transactional.id = null 23:16:54 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 23:16:54 kafka | [2024-02-27 23:14:54,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.515140994Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 23:16:54 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | [2024-02-27 23:14:54,555] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.515719415Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=577.771µs 23:16:54 policy-pap | 23:16:54 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:54 kafka | [2024-02-27 23:14:54,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.518653439Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 23:16:54 policy-pap | [2024-02-27T23:14:54.049+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | [2024-02-27 23:14:54,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.518826778Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=174.919µs 23:16:54 policy-pap | [2024-02-27T23:14:54.064+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:54 policy-db-migrator | 23:16:54 kafka | [2024-02-27 23:14:54,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.522145043Z level=info msg="Executing migration" id="Add created time to annotation table" 23:16:54 policy-pap | [2024-02-27T23:14:54.064+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | [2024-02-27 23:14:54,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.526176284Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.030761ms 23:16:54 policy-pap | [2024-02-27T23:14:54.064+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709075694064 23:16:54 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:54 kafka | [2024-02-27 23:14:54,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.532093905Z level=info msg="Executing migration" id="Add updated time to annotation table" 23:16:54 policy-pap | [2024-02-27T23:14:54.064+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=bdff849d-228e-461f-880e-f613327978ca, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | [2024-02-27 23:14:54,555] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.536054544Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.955429ms 23:16:54 policy-pap | [2024-02-27T23:14:54.064+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0ff262d0-5912-43f9-907e-656cd03ef19d, alive=false, publisher=null]]: starting 23:16:54 policy-db-migrator | 23:16:54 kafka | [2024-02-27 23:14:54,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.538976567Z level=info msg="Executing migration" id="Add index for created in annotation table" 23:16:54 policy-pap | [2024-02-27T23:14:54.065+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:54 policy-db-migrator | -------------- 23:16:54 kafka | [2024-02-27 23:14:54,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.539848003Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=871.106µs 23:16:54 policy-pap | acks = -1 23:16:54 policy-db-migrator | TRUNCATE TABLE sequence 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.542779427Z level=info msg="Executing migration" id="Add index for updated in annotation table" 23:16:54 policy-pap | auto.include.jmx.reporter = true 23:16:54 kafka | [2024-02-27 23:14:54,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.543673435Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=899.758µs 23:16:54 policy-pap | batch.size = 16384 23:16:54 kafka | [2024-02-27 23:14:54,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.548747891Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 23:16:54 policy-pap | bootstrap.servers = [kafka:9092] 23:16:54 kafka | [2024-02-27 23:14:54,555] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.548969482Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=221.651µs 23:16:54 policy-pap | buffer.memory = 33554432 23:16:54 kafka | [2024-02-27 23:14:54,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.551850315Z level=info 
msg="Executing migration" id="Add epoch_end column" 23:16:54 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:54 kafka | [2024-02-27 23:14:54,556] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.558191148Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=6.340144ms 23:16:54 policy-pap | client.id = producer-2 23:16:54 kafka | [2024-02-27 23:14:54,556] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:54 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.561432778Z level=info msg="Executing migration" id="Add index for epoch_end" 23:16:54 policy-pap | compression.type = none 23:16:54 kafka | [2024-02-27 23:14:54,561] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.56204789Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=615.502µs 23:16:54 policy-pap | connections.max.idle.ms = 540000 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.564677828Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 23:16:54 policy-pap | delivery.timeout.ms = 120000 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.564793574Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=118.036µs 23:16:54 policy-pap | enable.idempotence = true 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | DROP TABLE pdpstatistics 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.569501412Z level=info msg="Executing migration" id="Move region to single row" 23:16:54 policy-pap | interceptor.classes = [] 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.570094123Z level=info msg="Migration successfully executed" id="Move region to single row" duration=594.901µs 23:16:54 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator 
t=2024-02-27T23:14:15.573483831Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 23:16:54 policy-pap | linger.ms = 0 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.574733727Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.249336ms 23:16:54 policy-pap | max.block.ms = 60000 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.577809029Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 23:16:54 policy-pap | max.in.flight.requests.per.connection = 5 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.578653313Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=831.153µs 23:16:54 policy-pap | max.request.size = 1048576 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.583311568Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 23:16:54 policy-pap | metadata.max.age.ms = 300000 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.584202985Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=891.017µs 23:16:54 policy-pap | metadata.max.idle.ms = 300000 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.587584012Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 23:16:54 policy-pap | metric.reporters = [] 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.588462329Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=877.907µs 23:16:54 policy-pap | metrics.num.samples = 2 
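
The controller entries interleaved above record partitions of __consumer_offsets and policy-pdp-pap moving from NonExistentPartition to NewPartition, each with a single assigned replica on broker 1. Purely as an illustration (not part of this CSIT job), the sketch below shows the kind of kafka-clients AdminClient call that produces such transitions on a one-broker cluster; the class name is hypothetical, and kafka:9092 is the bootstrap address logged by policy-pap.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreatePdpPapTopicSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Broker address taken from the bootstrap.servers value in the policy-pap log.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // One partition, replication factor 1 -- matches the single-broker
                // "assigned replicas 1" state changes recorded by the controller.
                NewTopic topic = new NewTopic("policy-pdp-pap", 1, (short) 1);
                admin.createTopics(Collections.singleton(topic)).all().get();
            }
        }
    }
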
23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.591620525Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 23:16:54 policy-pap | metrics.recording.level = INFO 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.592466059Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=845.534µs 23:16:54 policy-pap | metrics.sample.window.ms = 30000 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | DROP TABLE statistics_sequence 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.597084262Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 23:16:54 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | -------------- 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.597915866Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=831.084µs 23:16:54 policy-pap | partitioner.availability.timeout.ms = 0 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.60084757Z level=info msg="Executing migration" id="Increase tags column to length 4096" 23:16:54 policy-pap | partitioner.class = null 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | policyadmin: OK: upgrade (1300) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.600910673Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=64.193µs 23:16:54 policy-pap | partitioner.ignore.keys = false 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-db-migrator | name version 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.603770793Z level=info msg="Executing migration" id="create test_data table" 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | receive.buffer.bytes = 32768 23:16:54 policy-db-migrator | policyadmin 1300 23:16:54 grafana | 
logger=migrator t=2024-02-27T23:14:15.604537963Z level=info msg="Migration successfully executed" id="create test_data table" duration=766.99µs 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | reconnect.backoff.max.ms = 1000 23:16:54 policy-db-migrator | ID script operation from_version to_version tag success atTime 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.60903205Z level=info msg="Executing migration" id="create dashboard_version table v1" 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | reconnect.backoff.ms = 50 23:16:54 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:24 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.609762048Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=728.778µs 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | request.timeout.ms = 30000 23:16:54 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:24 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.615472198Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | retries = 2147483647 23:16:54 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:24 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.616341255Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=868.787µs 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | retry.backoff.ms = 100 23:16:54 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:24 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.619755824Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:16:54 kafka | [2024-02-27 23:14:54,562] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.client.callback.handler.class = null 23:16:54 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:24 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.621177358Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.416914ms 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from 
NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.jaas.config = null 23:16:54 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:24 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.626016173Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:54 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:24 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.626326469Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=310.266µs 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:54 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:24 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.629486465Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.kerberos.service.name = null 23:16:54 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:24 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.629844114Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=357.709µs 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:54 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:24 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.632218129Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:54 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:24 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.632279292Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=61.793µs 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.login.callback.handler.class = null 23:16:54 policy-db-migrator | 12 
0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.635302861Z level=info msg="Executing migration" id="create team table" 23:16:54 policy-pap | sasl.login.class = null 23:16:54 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.635985547Z level=info msg="Migration successfully executed" id="create team table" duration=682.426µs 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.login.connect.timeout.ms = null 23:16:54 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.640066341Z level=info msg="Executing migration" id="add index team.org_id" 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.login.read.timeout.ms = null 23:16:54 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.641012921Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=946.56µs 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:54 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.644303775Z level=info msg="Executing migration" id="add unique index team_org_id_name" 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:54 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.645183171Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=878.906µs 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:54 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.649761711Z level=info msg="Executing migration" id="Add column uid in team" 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 
from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:54 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.654007094Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.247274ms 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:54 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.657253975Z level=info msg="Executing migration" id="Update uid column values in team" 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:54 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.657419684Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=165.898µs 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.mechanism = GSSAPI 23:16:54 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.659610339Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:54 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.660521076Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=910.207µs 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:54 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.664722848Z level=info msg="Executing migration" id="create team member table" 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:54 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.665385122Z level=info msg="Migration 
successfully executed" id="create team member table" duration=662.134µs 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:54 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.671213109Z level=info msg="Executing migration" id="add index team_member.org_id" 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:54 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.672546889Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.33241ms 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:54 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.676221962Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 23:16:54 kafka | [2024-02-27 23:14:54,563] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 23:16:54 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:54 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.677598844Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.376212ms 23:16:54 kafka | [2024-02-27 23:14:54,563] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:54 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:54 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.681058767Z level=info msg="Executing migration" id="add index team_member.team_id" 23:16:54 kafka | [2024-02-27 23:14:54,707] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:54 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.68189492Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=835.733µs 23:16:54 kafka | 
[2024-02-27 23:14:54,708] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:54 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.686805339Z level=info msg="Executing migration" id="Add column email to team table" 23:16:54 kafka | [2024-02-27 23:14:54,708] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | security.protocol = PLAINTEXT 23:16:54 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.691304655Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.495145ms 23:16:54 kafka | [2024-02-27 23:14:54,708] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | security.providers = null 23:16:54 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.694798639Z level=info msg="Executing migration" id="Add column external to team_member table" 23:16:54 kafka | [2024-02-27 23:14:54,708] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | send.buffer.bytes = 131072 23:16:54 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.699218441Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.419542ms 23:16:54 kafka | [2024-02-27 23:14:54,708] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:54 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:25 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.702522494Z level=info msg="Executing migration" id="Add column permission to team_member table" 23:16:54 kafka | [2024-02-27 23:14:54,708] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:54 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.706905025Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.382031ms 23:16:54 kafka | [2024-02-27 23:14:54,708] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | ssl.cipher.suites = null 23:16:54 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.711894828Z level=info msg="Executing migration" id="create dashboard acl table" 23:16:54 kafka | [2024-02-27 23:14:54,708] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:54 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.71269817Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=803.022µs 23:16:54 kafka | [2024-02-27 23:14:54,708] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.716084377Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 23:16:54 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 kafka | [2024-02-27 23:14:54,708] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | ssl.engine.factory.class = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.71746045Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.375733ms 23:16:54 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 kafka | [2024-02-27 23:14:54,708] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | ssl.key.password = null 23:16:54 
grafana | logger=migrator t=2024-02-27T23:14:15.721124913Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 23:16:54 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 kafka | [2024-02-27 23:14:54,709] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.722645782Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.51981ms 23:16:54 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 kafka | [2024-02-27 23:14:54,709] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | ssl.keystore.certificate.chain = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.727313448Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 23:16:54 kafka | [2024-02-27 23:14:54,709] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | ssl.keystore.key = null 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.728252107Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=938.629µs 23:16:54 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | ssl.keystore.location = null 23:16:54 kafka | [2024-02-27 23:14:54,709] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.733292222Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 23:16:54 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | ssl.keystore.password = null 23:16:54 kafka | [2024-02-27 23:14:54,709] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.734637453Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.344681ms 23:16:54 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 
0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | ssl.keystore.type = JKS 23:16:54 kafka | [2024-02-27 23:14:54,709] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.738337667Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 23:16:54 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | ssl.protocol = TLSv1.3 23:16:54 kafka | [2024-02-27 23:14:54,709] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.739574953Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.237686ms 23:16:54 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | ssl.provider = null 23:16:54 kafka | [2024-02-27 23:14:54,709] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.743821596Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 23:16:54 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | ssl.secure.random.implementation = null 23:16:54 kafka | [2024-02-27 23:14:54,709] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.744704852Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=888.687µs 23:16:54 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:54 kafka | [2024-02-27 23:14:54,709] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.747813825Z level=info msg="Executing migration" id="add index dashboard_permission" 23:16:54 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | ssl.truststore.certificates = null 23:16:54 kafka | [2024-02-27 23:14:54,710] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.748684082Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=865.186µs 23:16:54 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | ssl.truststore.location = null 23:16:54 kafka | [2024-02-27 23:14:54,710] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.752119882Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 23:16:54 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | ssl.truststore.password = null 23:16:54 kafka | [2024-02-27 23:14:54,710] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.752565545Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=445.643µs 23:16:54 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | ssl.truststore.type = JKS 23:16:54 kafka | [2024-02-27 23:14:54,710] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.757245592Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 23:16:54 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | transaction.timeout.ms = 60000 23:16:54 kafka | [2024-02-27 23:14:54,710] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.757454862Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=209.591µs 23:16:54 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | transactional.id = null 23:16:54 kafka | [2024-02-27 23:14:54,710] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.759798906Z level=info msg="Executing migration" id="create tag table" 23:16:54 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:54 kafka | [2024-02-27 23:14:54,710] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.760415638Z level=info msg="Migration successfully executed" id="create tag table" duration=616.182µs 23:16:54 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | 23:16:54 kafka | [2024-02-27 23:14:54,710] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.765113105Z level=info msg="Executing migration" id="add index tag.key_value" 23:16:54 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | [2024-02-27T23:14:54.065+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
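
The ProducerConfig dump that policy-pap prints above is a standard kafka-clients configuration. The following minimal sketch builds a producer from only the values visible in this log (bootstrap.servers = [kafka:9092], acks = -1, enable.idempotence = true, client.id = producer-2, StringSerializer for key and value); the class name and the sample record are illustrative, and policy-pdp-pap is the topic seen elsewhere in this log rather than a confirmed publish target of producer-2.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapPublisherSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // bootstrap.servers = [kafka:9092]
            props.put(ProducerConfig.ACKS_CONFIG, "-1");                        // acks = -1 (all in-sync replicas)
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");        // enable.idempotence = true
            props.put(ProducerConfig.CLIENT_ID_CONFIG, "producer-2");           // client.id = producer-2
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            // try-with-resources closes the producer and flushes pending records on exit.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", "key", "{}"));
                producer.flush();
            }
        }
    }
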
23:16:54 kafka | [2024-02-27 23:14:54,710] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.766516979Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.403393ms 23:16:54 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | [2024-02-27T23:14:54.069+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:54 kafka | [2024-02-27 23:14:54,710] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.769930188Z level=info msg="Executing migration" id="create login attempt table" 23:16:54 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | [2024-02-27T23:14:54.069+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:54 kafka | [2024-02-27 23:14:54,711] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.770953112Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.022474ms 23:16:54 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:26 23:16:54 policy-pap | [2024-02-27T23:14:54.069+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709075694069 23:16:54 kafka | [2024-02-27 23:14:54,711] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.77510601Z level=info msg="Executing migration" id="add index login_attempt.username" 23:16:54 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 policy-pap | [2024-02-27T23:14:54.069+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0ff262d0-5912-43f9-907e-656cd03ef19d, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:54 kafka | [2024-02-27 23:14:54,711] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 policy-db-migrator | 65 0740-toscarelationshiptype.sql 
upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.776020319Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=907.218µs 23:16:54 kafka | [2024-02-27 23:14:54,711] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.7811969Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 23:16:54 policy-pap | [2024-02-27T23:14:54.069+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 23:16:54 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.782445536Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.247316ms 23:16:54 policy-pap | [2024-02-27T23:14:54.069+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 23:16:54 kafka | [2024-02-27 23:14:54,711] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.785840004Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 23:16:54 policy-pap | [2024-02-27T23:14:54.071+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 23:16:54 kafka | [2024-02-27 23:14:54,711] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.805990734Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=20.151879ms 23:16:54 policy-pap | [2024-02-27T23:14:54.071+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 23:16:54 kafka | [2024-02-27 23:14:54,711] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.809074606Z level=info msg="Executing migration" id="create login_attempt v2" 23:16:54 policy-pap | 
[2024-02-27T23:14:54.073+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 23:16:54 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,711] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.809592324Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=513.548µs 23:16:54 policy-pap | [2024-02-27T23:14:54.074+00:00|INFO|TimerManager|Thread-9] timer manager update started 23:16:54 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,711] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.814839529Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 23:16:54 policy-pap | [2024-02-27T23:14:54.077+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 23:16:54 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,711] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.815766988Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=925.23µs 23:16:54 policy-pap | [2024-02-27T23:14:54.077+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 23:16:54 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,711] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.819750737Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 23:16:54 policy-pap | [2024-02-27T23:14:54.077+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 23:16:54 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,712] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator 
t=2024-02-27T23:14:15.82018791Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=437.223µs 23:16:54 policy-pap | [2024-02-27T23:14:54.078+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 23:16:54 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,712] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.823478103Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 23:16:54 policy-pap | [2024-02-27T23:14:54.078+00:00|INFO|ServiceManager|main] Policy PAP started 23:16:54 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,712] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.824091966Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=615.874µs 23:16:54 policy-pap | [2024-02-27T23:14:54.080+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.376 seconds (process running for 11.0) 23:16:54 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,712] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.829434167Z level=info msg="Executing migration" id="create user auth table" 23:16:54 policy-pap | [2024-02-27T23:14:54.510+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:54 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,712] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.830103451Z level=info msg="Migration successfully executed" id="create user auth table" duration=668.795µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.833281988Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 23:16:54 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2702242314240800u 1 
2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,712] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:54.510+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: 1E2F8WPXTiubxm6qH6MBlQ 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.834173815Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=891.207µs 23:16:54 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,712] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:54.510+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 1E2F8WPXTiubxm6qH6MBlQ 23:16:54 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,712] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.8371304Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 23:16:54 policy-pap | [2024-02-27T23:14:54.515+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: 1E2F8WPXTiubxm6qH6MBlQ 23:16:54 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.837192524Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=62.644µs 23:16:54 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 23:16:54 policy-pap | 
[2024-02-27T23:14:54.536+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.84206451Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 23:16:54 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:54.536+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Cluster ID: 1E2F8WPXTiubxm6qH6MBlQ 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.847025271Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=4.960251ms 23:16:54 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:27 23:16:54 kafka | [2024-02-27 23:14:54,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:54.617+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.928738706Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 23:16:54 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:28 23:16:54 kafka | [2024-02-27 23:14:54,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:54.626+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.937072574Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=8.337138ms 23:16:54 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:28 23:16:54 kafka | [2024-02-27 23:14:54,730] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:54.633+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.940784859Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 23:16:54 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:28 23:16:54 kafka | [2024-02-27 23:14:54,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:54.680+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.944676414Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.891395ms 23:16:54 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:28 23:16:54 kafka | [2024-02-27 23:14:54,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:54.740+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.950083989Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 23:16:54 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:28 23:16:54 kafka | [2024-02-27 23:14:54,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:54.791+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:54 grafana | 
logger=migrator t=2024-02-27T23:14:15.955200807Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.116748ms 23:16:54 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:28 23:16:54 kafka | [2024-02-27 23:14:54,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:54.848+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.958585806Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 23:16:54 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:28 23:16:54 kafka | [2024-02-27 23:14:54,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:54.902+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.959774528Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.188632ms 23:16:54 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:28 23:16:54 kafka | [2024-02-27 23:14:54,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:54.953+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.963124674Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 23:16:54 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:28 23:16:54 kafka | [2024-02-27 23:14:54,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:55.008+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.969612545Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.487471ms 23:16:54 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2702242314240800u 1 2024-02-27 23:14:28 23:16:54 kafka | [2024-02-27 23:14:54,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:55.058+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.975841683Z level=info msg="Executing migration" id="create server_lock table" 23:16:54 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2702242314240900u 1 2024-02-27 23:14:28 23:16:54 policy-pap | [2024-02-27T23:14:55.118+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.976545009Z level=info msg="Migration successfully executed" id="create server_lock table" duration=703.196µs 23:16:54 kafka | [2024-02-27 23:14:54,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 23:16:54 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2702242314240900u 1 2024-02-27 23:14:28 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.98017713Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 23:16:54 kafka | [2024-02-27 23:14:54,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 23:16:54 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2702242314240900u 1 2024-02-27 23:14:28 23:16:54 kafka | [2024-02-27 23:14:54,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:55.163+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.981860829Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.681009ms 23:16:54 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2702242314240900u 1 2024-02-27 23:14:28 23:16:54 kafka | [2024-02-27 23:14:54,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:55.224+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.985730432Z level=info msg="Executing migration" id="create user auth token table" 23:16:54 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2702242314240900u 1 2024-02-27 23:14:28 23:16:54 kafka | [2024-02-27 23:14:54,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:55.268+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.987244472Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.51597ms 23:16:54 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2702242314240900u 1 2024-02-27 23:14:28 23:16:54 kafka | [2024-02-27 23:14:54,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:55.328+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 
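The UNKNOWN_TOPIC_OR_PARTITION and LEADER_NOT_AVAILABLE warnings from the policy-pap consumers above are expected at this point in the run: the consumers are fetching metadata for the policy-pdp-pap topic while the Kafka controller is still performing the become-leader transitions shown in the interleaved kafka lines, and the warnings stop once a leader is elected. A minimal sketch of the same wait-for-metadata pattern, assuming the kafka-python client and the broker address kafka:9092 taken from the log (wait_for_topic is an illustrative helper, not part of the CSIT job):

    # Poll broker metadata until the topic is known to the cluster, mirroring
    # the retry behaviour visible in the policy-pap warnings above.
    import time
    from kafka import KafkaConsumer

    def wait_for_topic(topic="policy-pdp-pap", bootstrap="kafka:9092", timeout=60):
        consumer = KafkaConsumer(bootstrap_servers=bootstrap)
        deadline = time.time() + timeout
        try:
            while time.time() < deadline:
                partitions = consumer.partitions_for_topic(topic)  # None until metadata is available
                if partitions:
                    return partitions
                time.sleep(1)
            raise TimeoutError(f"no metadata for {topic} within {timeout}s")
        finally:
            consumer.close()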
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.993137742Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 23:16:54 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2702242314240900u 1 2024-02-27 23:14:28 23:16:54 kafka | [2024-02-27 23:14:54,732] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:55.373+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.994499634Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.361162ms 23:16:54 kafka | [2024-02-27 23:14:54,732] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:15.999502456Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 23:16:54 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2702242314240900u 1 2024-02-27 23:14:29 23:16:54 policy-pap | [2024-02-27T23:14:55.439+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:54 kafka | [2024-02-27 23:14:54,732] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.00147121Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.972694ms 23:16:54 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2702242314240900u 1 2024-02-27 23:14:29 23:16:54 policy-pap | [2024-02-27T23:14:55.445+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] (Re-)joining group 23:16:54 kafka | [2024-02-27 23:14:54,732] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 
(state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.005021819Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 23:16:54 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2702242314240900u 1 2024-02-27 23:14:29 23:16:54 policy-pap | [2024-02-27T23:14:55.480+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:54 kafka | [2024-02-27 23:14:54,732] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.006232715Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.210566ms 23:16:54 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2702242314240900u 1 2024-02-27 23:14:29 23:16:54 policy-pap | [2024-02-27T23:14:55.482+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:54 kafka | [2024-02-27 23:14:54,732] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.011815007Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 23:16:54 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2702242314240900u 1 2024-02-27 23:14:29 23:16:54 policy-pap | [2024-02-27T23:14:55.482+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Request joining group due to: need to re-join with the given member-id: consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3-e4378968-77ad-4b63-8cfc-d4149cd89a93 23:16:54 kafka | [2024-02-27 23:14:54,732] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.020538928Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.725381ms 23:16:54 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2702242314240900u 1 2024-02-27 23:14:29 23:16:54 policy-pap | [2024-02-27T23:14:55.483+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before 
actually entering a consumer group.' (MemberIdRequiredException) 23:16:54 kafka | [2024-02-27 23:14:54,732] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.024121322Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 23:16:54 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2702242314241000u 1 2024-02-27 23:14:29 23:16:54 policy-pap | [2024-02-27T23:14:55.483+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] (Re-)joining group 23:16:54 kafka | [2024-02-27 23:14:54,732] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.025066912Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=931.79µs 23:16:54 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2702242314241000u 1 2024-02-27 23:14:29 23:16:54 policy-pap | [2024-02-27T23:14:55.490+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-e7b834ec-6ce7-4813-ad09-40e41dbc774c 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.029029017Z level=info msg="Executing migration" id="create cache_data table" 23:16:54 kafka | [2024-02-27 23:14:54,733] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 23:16:54 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2702242314241000u 1 2024-02-27 23:14:29 23:16:54 policy-pap | [2024-02-27T23:14:55.491+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.030057092Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.028345ms 23:16:54 kafka | [2024-02-27 23:14:54,733] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 23:16:54 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2702242314241000u 1 2024-02-27 23:14:29 23:16:54 policy-pap | [2024-02-27T23:14:55.491+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.035276094Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 23:16:54 kafka | [2024-02-27 23:14:54,733] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 23:16:54 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2702242314241000u 1 2024-02-27 23:14:29 23:16:54 policy-pap | [2024-02-27T23:14:58.510+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-e7b834ec-6ce7-4813-ad09-40e41dbc774c', protocol='range'} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.036433256Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.155312ms 23:16:54 kafka | [2024-02-27 23:14:54,733] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 23:16:54 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2702242314241000u 1 2024-02-27 23:14:29 23:16:54 policy-pap | [2024-02-27T23:14:58.512+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Successfully joined group with generation Generation{generationId=1, memberId='consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3-e4378968-77ad-4b63-8cfc-d4149cd89a93', protocol='range'} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.041096798Z level=info msg="Executing migration" id="create short_url table v1" 23:16:54 kafka | [2024-02-27 23:14:54,733] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:14:58.518+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-e7b834ec-6ce7-4813-ad09-40e41dbc774c=Assignment(partitions=[policy-pdp-pap-0])} 23:16:54 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2702242314241000u 1 2024-02-27 23:14:29 23:16:54 kafka | [2024-02-27 23:14:54,733] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.042501844Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.404556ms 23:16:54 policy-pap | [2024-02-27T23:14:58.518+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Finished assignment for group at generation 1: {consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3-e4378968-77ad-4b63-8cfc-d4149cd89a93=Assignment(partitions=[policy-pdp-pap-0])} 23:16:54 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2702242314241000u 1 2024-02-27 23:14:29 23:16:54 kafka | [2024-02-27 23:14:54,733] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.046049706Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 23:16:54 policy-pap | [2024-02-27T23:14:58.547+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Successfully synced group in generation Generation{generationId=1, memberId='consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3-e4378968-77ad-4b63-8cfc-d4149cd89a93', protocol='range'} 23:16:54 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2702242314241000u 1 2024-02-27 23:14:29 23:16:54 kafka | [2024-02-27 23:14:54,733] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.047406399Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.356252ms 23:16:54 policy-pap | [2024-02-27T23:14:58.547+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation 
Generation{generationId=1, memberId='consumer-policy-pap-4-e7b834ec-6ce7-4813-ad09-40e41dbc774c', protocol='range'} 23:16:54 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2702242314241100u 1 2024-02-27 23:14:29 23:16:54 kafka | [2024-02-27 23:14:54,733] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.053675328Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 23:16:54 policy-pap | [2024-02-27T23:14:58.548+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:54 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2702242314241200u 1 2024-02-27 23:14:29 23:16:54 kafka | [2024-02-27 23:14:54,733] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.053846617Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=171.409µs 23:16:54 policy-pap | [2024-02-27T23:14:58.548+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:54 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2702242314241200u 1 2024-02-27 23:14:29 23:16:54 kafka | [2024-02-27 23:14:54,733] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.057364747Z level=info msg="Executing migration" id="delete alert_definition table" 23:16:54 policy-pap | [2024-02-27T23:14:58.555+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Adding newly assigned partitions: policy-pdp-pap-0 23:16:54 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2702242314241200u 1 2024-02-27 23:14:29 23:16:54 kafka | [2024-02-27 23:14:54,733] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 
for partition __consumer_offsets-43 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.057568208Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=203.491µs 23:16:54 policy-pap | [2024-02-27T23:14:58.555+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 23:16:54 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2702242314241200u 1 2024-02-27 23:14:30 23:16:54 kafka | [2024-02-27 23:14:54,733] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.061216404Z level=info msg="Executing migration" id="recreate alert_definition table" 23:16:54 policy-pap | [2024-02-27T23:14:58.576+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 23:16:54 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2702242314241300u 1 2024-02-27 23:14:30 23:16:54 kafka | [2024-02-27 23:14:54,734] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.063857737Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=2.639903ms 23:16:54 policy-pap | [2024-02-27T23:14:58.577+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Found no committed offset for partition policy-pdp-pap-0 23:16:54 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2702242314241300u 1 2024-02-27 23:14:30 23:16:54 kafka | [2024-02-27 23:14:54,734] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.070925169Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 23:16:54 policy-pap | [2024-02-27T23:14:58.594+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3, groupId=fd3c6b2f-e961-4dee-b92a-5df6cab870fa] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
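In the policy-pap lines above, both consumers (group policy-pap for the heartbeat source and group fd3c6b2f-e961-4dee-b92a-5df6cab870fa for the pdp-pap source) discover the group coordinator, join their groups, are assigned policy-pdp-pap-0 and, having found no committed offset, reset to the latest fetch position. The same join/sync/assign handshake can be reproduced with a small standalone consumer; a sketch assuming the kafka-python client, with the broker address, group id and topic taken from the log and everything else illustrative:

    # Subscribing with a group id drives FindCoordinator -> JoinGroup -> SyncGroup;
    # after the first poll() the assignment mirrors the "Adding newly assigned
    # partitions: policy-pdp-pap-0" lines above.
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "policy-pdp-pap",
        bootstrap_servers="kafka:9092",
        group_id="policy-pap",
        auto_offset_reset="latest",   # matches "Found no committed offset ... Resetting offset"
        enable_auto_commit=False,
    )
    consumer.poll(timeout_ms=5000)    # triggers the group rebalance
    print(consumer.assignment())      # e.g. {TopicPartition(topic='policy-pdp-pap', partition=0)}
    consumer.close()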
23:16:54 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2702242314241300u 1 2024-02-27 23:14:30 23:16:54 kafka | [2024-02-27 23:14:54,734] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.071919882Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.037165ms 23:16:54 policy-pap | [2024-02-27T23:14:58.594+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:16:54 policy-db-migrator | policyadmin: OK @ 1300 23:16:54 kafka | [2024-02-27 23:14:54,734] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.075872587Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 23:16:54 policy-pap | [2024-02-27T23:15:02.962+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:54 kafka | [2024-02-27 23:14:54,734] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.077114873Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.242017ms 23:16:54 policy-pap | [2024-02-27T23:15:02.962+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 23:16:54 kafka | [2024-02-27 23:14:54,734] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.082093352Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 23:16:54 kafka | [2024-02-27 23:14:54,734] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:15:02.963+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 1 ms 23:16:54 policy-pap | [2024-02-27T23:15:15.786+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 23:16:54 kafka | [2024-02-27 23:14:54,734] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.082277532Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=183.94µs 23:16:54 policy-pap | [] 23:16:54 kafka | [2024-02-27 23:14:54,734] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.08687832Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 23:16:54 policy-pap | [2024-02-27T23:15:15.786+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:54 kafka | [2024-02-27 23:14:54,739] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.088689138Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.800717ms 23:16:54 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"03fe34c6-820b-42c9-831a-589ef163ea8f","timestampMs":1709075715750,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup"} 23:16:54 kafka | [2024-02-27 23:14:54,742] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.09503543Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 23:16:54 policy-pap | [2024-02-27T23:15:15.786+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:54 kafka | [2024-02-27 23:14:54,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.09705489Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=2.01951ms 23:16:54 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"03fe34c6-820b-42c9-831a-589ef163ea8f","timestampMs":1709075715750,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup"} 23:16:54 kafka | [2024-02-27 23:14:54,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.101290819Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 23:16:54 policy-pap | [2024-02-27T23:15:15.795+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:54 kafka | [2024-02-27 23:14:54,743] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.102942248Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.651218ms 23:16:54 policy-pap | [2024-02-27T23:15:15.867+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate starting 23:16:54 kafka | [2024-02-27 23:14:54,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.108593253Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 23:16:54 policy-pap | [2024-02-27T23:15:15.868+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate starting listener 23:16:54 kafka | [2024-02-27 23:14:54,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.10963498Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.041617ms 23:16:54 policy-pap | [2024-02-27T23:15:15.868+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate starting timer 23:16:54 kafka | [2024-02-27 23:14:54,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.113206492Z level=info msg="Executing migration" id="Add column paused in alert_definition" 23:16:54 policy-pap | [2024-02-27T23:15:15.869+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=6eb49d91-6114-4186-abb4-512213842060, expireMs=1709075745869] 23:16:54 kafka | [2024-02-27 23:14:54,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.121733183Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=8.52745ms 23:16:54 policy-pap | [2024-02-27T23:15:15.870+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate starting enqueue 23:16:54 kafka | [2024-02-27 23:14:54,744] TRACE [Controller id=1 epoch=1] 
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.125154197Z level=info msg="Executing migration" id="drop alert_definition table"
23:16:54 policy-pap | [2024-02-27T23:15:15.870+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=6eb49d91-6114-4186-abb4-512213842060, expireMs=1709075745869]
23:16:54 kafka | [2024-02-27 23:14:54,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.126430817Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.276739ms
23:16:54 policy-pap | [2024-02-27T23:15:15.871+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate started
23:16:54 kafka | [2024-02-27 23:14:54,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.132372917Z level=info msg="Executing migration" id="delete alert_definition_version table"
23:16:54 policy-pap | [2024-02-27T23:15:15.872+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
23:16:54 kafka | [2024-02-27 23:14:54,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.132510425Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=133.237µs
23:16:54 policy-pap | {"source":"pap-f98d2e00-7ac1-4183-a084-4e9dbf0dfb89","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"6eb49d91-6114-4186-abb4-512213842060","timestampMs":1709075715853,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:54 kafka | [2024-02-27 23:14:54,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.135693606Z level=info msg="Executing migration" id="recreate alert_definition_version table"
23:16:54 policy-pap | [2024-02-27T23:15:15.907+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:54 kafka | [2024-02-27 23:14:54,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.136386664Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=691.027µs
23:16:54 policy-pap | {"source":"pap-f98d2e00-7ac1-4183-a084-4e9dbf0dfb89","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"6eb49d91-6114-4186-abb4-512213842060","timestampMs":1709075715853,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:54 kafka | [2024-02-27 23:14:54,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.139987218Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
23:16:54 policy-pap | [2024-02-27T23:15:15.910+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:54 kafka | [2024-02-27 23:14:54,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.141641217Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.642649ms
23:16:54 policy-pap | {"source":"pap-f98d2e00-7ac1-4183-a084-4e9dbf0dfb89","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"6eb49d91-6114-4186-abb4-512213842060","timestampMs":1709075715853,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:54 kafka | [2024-02-27 23:14:54,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.147926407Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
23:16:54 policy-pap | [2024-02-27T23:15:15.910+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
23:16:54 kafka | [2024-02-27 23:14:54,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.149260779Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.334103ms
23:16:54 policy-pap | [2024-02-27T23:15:15.910+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
23:16:54 kafka | [2024-02-27 23:14:54,744] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.153254654Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
23:16:54 policy-pap | [2024-02-27T23:15:15.922+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.153348079Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=93.835µs
23:16:54 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c73643ec-a1e5-42d7-a6f8-df8401235102","timestampMs":1709075715914,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup"}
23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.156604745Z level=info msg="Executing migration" id="drop alert_definition_version table"
23:16:54 policy-pap | [2024-02-27T23:15:15.922+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.157504614Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=899.479µs
23:16:54 policy-pap | [2024-02-27T23:15:15.929+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.162680524Z level=info msg="Executing migration" id="create alert_instance table"
23:16:54 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6eb49d91-6114-4186-abb4-512213842060","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"99736685-5d25-4f97-a366-4a5391bf9535","timestampMs":1709075715915,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.164055838Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.377614ms
23:16:54 policy-pap | [2024-02-27T23:15:15.929+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate stopping
23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.168465696Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
23:16:54 policy-pap | [2024-02-27T23:15:15.929+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate stopping enqueue
23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.170051671Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.585065ms
23:16:54 policy-pap | [2024-02-27T23:15:15.929+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate stopping timer
23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.176072156Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 23:16:54 policy-pap | [2024-02-27T23:15:15.930+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=6eb49d91-6114-4186-abb4-512213842060, expireMs=1709075745869] 23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.177850213Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.775667ms 23:16:54 policy-pap | [2024-02-27T23:15:15.930+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate stopping listener 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.18150767Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 23:16:54 policy-pap | [2024-02-27T23:15:15.930+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate stopped 23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.192074701Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=10.564151ms 23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:15:15.933+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.203704429Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c73643ec-a1e5-42d7-a6f8-df8401235102","timestampMs":1709075715914,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup"} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.205363469Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.66044ms 23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:15:15.935+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate successful 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.208631315Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 policy-pap | 
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.209708363Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.080618ms
23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:15.935+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpStateChange starting
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.212654592Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
23:16:54 kafka | [2024-02-27 23:14:54,745] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:15.935+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpStateChange starting listener
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.249701523Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=37.045351ms
23:16:54 kafka | [2024-02-27 23:14:54,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:15.935+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpStateChange starting timer
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.255663875Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
23:16:54 kafka | [2024-02-27 23:14:54,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:15.935+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=8c2bf324-be4c-4c28-974f-549c594c5dc4, expireMs=1709075745935]
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.292509484Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=36.8658ms
23:16:54 kafka | [2024-02-27 23:14:54,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:15.935+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpStateChange starting enqueue
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.295551478Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
23:16:54 policy-pap | [2024-02-27T23:15:15.935+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpStateChange started
23:16:54 kafka | [2024-02-27 23:14:54,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.296407714Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=855.356µs
executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=855.356µs 23:16:54 policy-pap | [2024-02-27T23:15:15.935+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=8c2bf324-be4c-4c28-974f-549c594c5dc4, expireMs=1709075745935] 23:16:54 kafka | [2024-02-27 23:14:54,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.300190829Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 23:16:54 policy-pap | [2024-02-27T23:15:15.936+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:54 kafka | [2024-02-27 23:14:54,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.301303599Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.11152ms 23:16:54 policy-pap | {"source":"pap-f98d2e00-7ac1-4183-a084-4e9dbf0dfb89","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8c2bf324-be4c-4c28-974f-549c594c5dc4","timestampMs":1709075715854,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:54 kafka | [2024-02-27 23:14:54,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.308417884Z level=info msg="Executing migration" id="add current_reason column related to current_state" 23:16:54 policy-pap | [2024-02-27T23:15:15.985+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:54 kafka | [2024-02-27 23:14:54,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.31427927Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.860237ms 23:16:54 policy-pap | {"source":"pap-f98d2e00-7ac1-4183-a084-4e9dbf0dfb89","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8c2bf324-be4c-4c28-974f-549c594c5dc4","timestampMs":1709075715854,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:54 kafka | [2024-02-27 23:14:54,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.318129998Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 23:16:54 policy-pap | [2024-02-27T23:15:15.985+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:16:54 kafka | [2024-02-27 23:14:54,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.323974413Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.843696ms 23:16:54 policy-pap | 
23:16:54 kafka | [2024-02-27 23:14:54,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.327740796Z level=info msg="Executing migration" id="create alert_rule table"
23:16:54 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"8c2bf324-be4c-4c28-974f-549c594c5dc4","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"42f81e5d-a936-45f9-9d7c-20ab91668a9d","timestampMs":1709075715949,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:54 kafka | [2024-02-27 23:14:54,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.32871916Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=978.143µs
23:16:54 policy-pap | [2024-02-27T23:15:15.999+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpStateChange stopping
23:16:54 kafka | [2024-02-27 23:14:54,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.336601065Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
23:16:54 policy-pap | [2024-02-27T23:15:15.999+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpStateChange stopping enqueue
23:16:54 kafka | [2024-02-27 23:14:54,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.338835155Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=2.23326ms
23:16:54 policy-pap | [2024-02-27T23:15:15.999+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpStateChange stopping timer
23:16:54 kafka | [2024-02-27 23:14:54,746] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.343027772Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
23:16:54 policy-pap | [2024-02-27T23:15:15.999+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=8c2bf324-be4c-4c28-974f-549c594c5dc4, expireMs=1709075745935]
23:16:54 kafka | [2024-02-27 23:14:54,747] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.344119031Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.091009ms
23:16:54 policy-pap | [2024-02-27T23:15:15.999+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpStateChange stopping listener
23:16:54 kafka | [2024-02-27 23:14:54,747] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.347577988Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
23:16:54 policy-pap | [2024-02-27T23:15:15.999+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpStateChange stopped
23:16:54 kafka | [2024-02-27 23:14:54,747] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.348484177Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=906.049µs
23:16:54 policy-pap | [2024-02-27T23:15:15.999+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpStateChange successful
23:16:54 kafka | [2024-02-27 23:14:54,750] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.354427258Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
23:16:54 policy-pap | [2024-02-27T23:15:15.999+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 start publishing next request
23:16:54 kafka | [2024-02-27 23:14:54,751] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.354778726Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=346.809µs
23:16:54 policy-pap | [2024-02-27T23:15:15.999+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate starting
23:16:54 kafka | [2024-02-27 23:14:54,751] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.359352924Z level=info msg="Executing migration" id="add column for to alert_rule"
23:16:54 policy-pap | [2024-02-27T23:15:15.999+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate starting listener
23:16:54 kafka | [2024-02-27 23:14:54,751] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.369005435Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=9.65345ms
23:16:54 policy-pap | [2024-02-27T23:15:15.999+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate starting timer
23:16:54 kafka | [2024-02-27 23:14:54,751] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.37263902Z level=info msg="Executing migration" id="add column annotations to alert_rule"
23:16:54 policy-pap | [2024-02-27T23:15:15.999+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=e52ab644-6231-4e4d-bad3-1f9b282d83a5, expireMs=1709075745999]
23:16:54 kafka | [2024-02-27 23:14:54,751] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.378927031Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.287731ms
23:16:54 policy-pap | [2024-02-27T23:15:16.000+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate starting enqueue
23:16:54 kafka | [2024-02-27 23:14:54,751] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.385507996Z level=info msg="Executing migration" id="add column labels to alert_rule"
23:16:54 policy-pap | [2024-02-27T23:15:16.000+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
23:16:54 kafka | [2024-02-27 23:14:54,752] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.392663523Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=7.155696ms
23:16:54 policy-pap | {"source":"pap-f98d2e00-7ac1-4183-a084-4e9dbf0dfb89","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e52ab644-6231-4e4d-bad3-1f9b282d83a5","timestampMs":1709075715974,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:54 kafka | [2024-02-27 23:14:54,752] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.396651638Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
23:16:54 policy-pap | [2024-02-27T23:15:16.001+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate started
23:16:54 kafka | [2024-02-27 23:14:54,752] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.397735796Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.084539ms
23:16:54 policy-pap | [2024-02-27T23:15:16.002+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:54 kafka | [2024-02-27 23:14:54,752] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.400965211Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
23:16:54 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6eb49d91-6114-4186-abb4-512213842060","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"99736685-5d25-4f97-a366-4a5391bf9535","timestampMs":1709075715915,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:54 kafka | [2024-02-27 23:14:54,752] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.402183886Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.218045ms
23:16:54 policy-pap | [2024-02-27T23:15:16.003+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6eb49d91-6114-4186-abb4-512213842060
23:16:54 kafka | [2024-02-27 23:14:54,752] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.408727669Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
23:16:54 policy-pap | [2024-02-27T23:15:16.006+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:54 kafka | [2024-02-27 23:14:54,752] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.419067628Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=10.334388ms
23:16:54 policy-pap | {"source":"pap-f98d2e00-7ac1-4183-a084-4e9dbf0dfb89","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"8c2bf324-be4c-4c28-974f-549c594c5dc4","timestampMs":1709075715854,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.423322528Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
23:16:54 kafka | [2024-02-27 23:14:54,752] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:16.006+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
23:16:54 policy-pap | [2024-02-27T23:15:16.006+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.427988709Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.666101ms
23:16:54 kafka | [2024-02-27 23:14:54,752] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"8c2bf324-be4c-4c28-974f-549c594c5dc4","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"42f81e5d-a936-45f9-9d7c-20ab91668a9d","timestampMs":1709075715949,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
No policies found."},"messageName":"PDP_STATUS","requestId":"42f81e5d-a936-45f9-9d7c-20ab91668a9d","timestampMs":1709075715949,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.431364272Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 23:16:54 kafka | [2024-02-27 23:14:54,752] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.432574317Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.218196ms 23:16:54 policy-pap | [2024-02-27T23:15:16.007+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8c2bf324-be4c-4c28-974f-549c594c5dc4 23:16:54 kafka | [2024-02-27 23:14:54,752] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:15:16.010+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:54 policy-pap | {"source":"pap-f98d2e00-7ac1-4183-a084-4e9dbf0dfb89","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e52ab644-6231-4e4d-bad3-1f9b282d83a5","timestampMs":1709075715974,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.439413116Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 23:16:54 kafka | [2024-02-27 23:14:54,752] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:15:16.010+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.449442198Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=10.029112ms 23:16:54 kafka | [2024-02-27 23:14:54,752] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:54,752] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, 
23:16:54 policy-pap | [2024-02-27T23:15:16.012+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.453600462Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
23:16:54 kafka | [2024-02-27 23:14:54,752] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 policy-pap | {"source":"pap-f98d2e00-7ac1-4183-a084-4e9dbf0dfb89","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"e52ab644-6231-4e4d-bad3-1f9b282d83a5","timestampMs":1709075715974,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.459909624Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.308242ms
23:16:54 policy-pap | [2024-02-27T23:15:16.013+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.463795473Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
23:16:54 policy-pap | [2024-02-27T23:15:16.018+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.463971823Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=181.8µs
23:16:54 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e52ab644-6231-4e4d-bad3-1f9b282d83a5","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"f206ef84-db1f-46b7-8de2-6ff01d93655f","timestampMs":1709075716009,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:16.019+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id e52ab644-6231-4e4d-bad3-1f9b282d83a5
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.467959949Z level=info msg="Executing migration" id="create alert_rule_version table"
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:16.020+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.469035926Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.075728ms
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"e52ab644-6231-4e4d-bad3-1f9b282d83a5","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"f206ef84-db1f-46b7-8de2-6ff01d93655f","timestampMs":1709075716009,"name":"apex-96c0945f-1271-4075-8707-21652b619ca8","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.475359428Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:16.021+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate stopping
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.477415399Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.056052ms
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:16.021+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate stopping enqueue
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.481307359Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:16.021+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate stopping timer
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.482499814Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.189835ms
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:16.021+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=e52ab644-6231-4e4d-bad3-1f9b282d83a5, expireMs=1709075745999]
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.489398556Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:16.021+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate stopping listener
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.489594256Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=196.061µs
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:16.021+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate stopped
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.493310367Z level=info msg="Executing migration" id="add column for to alert_rule_version"
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:16.025+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 PdpUpdate successful
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.500055231Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.744394ms
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 policy-pap | [2024-02-27T23:15:16.025+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-96c0945f-1271-4075-8707-21652b619ca8 has no more requests
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.503590862Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
23:16:54 policy-pap | [2024-02-27T23:15:23.531+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.509955016Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.363834ms
23:16:54 policy-pap | [2024-02-27T23:15:23.538+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.516766024Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
23:16:54 policy-pap | [2024-02-27T23:15:23.944+00:00|INFO|SessionData|http-nio-6969-exec-5] unknown group testGroup
23:16:54 kafka | [2024-02-27 23:14:54,753] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.523406082Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.641059ms
23:16:54 policy-pap | [2024-02-27T23:15:24.437+00:00|INFO|SessionData|http-nio-6969-exec-5] create cached group testGroup
23:16:54 kafka | [2024-02-27 23:14:54,754] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.5274355Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
23:16:54 policy-pap | [2024-02-27T23:15:24.438+00:00|INFO|SessionData|http-nio-6969-exec-5] creating DB group testGroup
23:16:54 kafka | [2024-02-27 23:14:54,754] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.533897969Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.461798ms
23:16:54 policy-pap | [2024-02-27T23:15:24.947+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup
23:16:54 kafka | [2024-02-27 23:14:54,754] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.537941057Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
23:16:54 policy-pap | [2024-02-27T23:15:25.198+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0
23:16:54 kafka | [2024-02-27 23:14:54,754] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.544532663Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.591166ms
23:16:54 policy-pap | [2024-02-27T23:15:25.297+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
23:16:54 kafka | [2024-02-27 23:14:54,754] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.548084995Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
23:16:54 policy-pap | [2024-02-27T23:15:25.297+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup
23:16:54 kafka | [2024-02-27 23:14:54,754] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.548260104Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=182.97µs 23:16:54 policy-pap | [2024-02-27T23:15:25.297+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup 23:16:54 kafka | [2024-02-27 23:14:54,754] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.553859716Z level=info msg="Executing migration" id=create_alert_configuration_table 23:16:54 policy-pap | [2024-02-27T23:15:25.312+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-02-27T23:15:25Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-02-27T23:15:25Z, user=policyadmin)] 23:16:54 kafka | [2024-02-27 23:14:54,754] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.554621187Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=765.901µs 23:16:54 policy-pap | [2024-02-27T23:15:26.011+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup 23:16:54 kafka | [2024-02-27 23:14:54,754] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.558341089Z level=info msg="Executing migration" id="Add column default in alert_configuration" 23:16:54 policy-pap | [2024-02-27T23:15:26.013+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 23:16:54 kafka | [2024-02-27 23:14:54,754] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.564892802Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.551223ms 23:16:54 policy-pap | 
[2024-02-27T23:15:26.013+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0 23:16:54 kafka | [2024-02-27 23:14:54,754] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.568447584Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 23:16:54 policy-pap | [2024-02-27T23:15:26.013+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup 23:16:54 kafka | [2024-02-27 23:14:54,754] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.568625553Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=176.659µs 23:16:54 policy-pap | [2024-02-27T23:15:26.014+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup 23:16:54 kafka | [2024-02-27 23:14:54,754] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.574759045Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 23:16:54 policy-pap | [2024-02-27T23:15:26.025+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-27T23:15:26Z, user=policyadmin)] 23:16:54 kafka | [2024-02-27 23:14:54,754] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.583622184Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=8.863069ms 23:16:54 policy-pap | [2024-02-27T23:15:26.366+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.587344145Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 23:16:54 
policy-pap | [2024-02-27T23:15:26.366+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.588638615Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.293351ms 23:16:54 policy-pap | [2024-02-27T23:15:26.366+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:15:26.366+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.592541255Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:15:26.366+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.599619048Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=7.077173ms 23:16:54 policy-pap | [2024-02-27T23:15:26.366+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.60596125Z level=info msg="Executing migration" id=create_ngalert_configuration_table 23:16:54 policy-pap | [2024-02-27T23:15:26.381+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-27T23:15:26Z, user=policyadmin)] 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.606917161Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=956.031µs 23:16:54 policy-pap | [2024-02-27T23:15:45.870+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=6eb49d91-6114-4186-abb4-512213842060, expireMs=1709075745869] 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling 
LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.61059684Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 23:16:54 policy-pap | [2024-02-27T23:15:45.936+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=8c2bf324-be4c-4c28-974f-549c594c5dc4, expireMs=1709075745935] 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.611834967Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.237637ms 23:16:54 policy-pap | [2024-02-27T23:15:47.009+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:54 policy-pap | [2024-02-27T23:15:47.011+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.615116924Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.622883534Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.76519ms 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.704159323Z level=info msg="Executing migration" id="create provenance_type table" 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.705893276Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.736803ms 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.712349725Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the 
become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.714288599Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.938724ms 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.719842819Z level=info msg="Executing migration" id="create alert_image table" 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.722347174Z level=info msg="Migration successfully executed" id="create alert_image table" duration=2.507415ms 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.72632457Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.727874163Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.543513ms 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.73300657Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.733246903Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=240.223µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.737170255Z level=info msg="Executing migration" id=create_alert_configuration_history_table 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.738318137Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.147472ms 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 
from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.742679073Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.74449112Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.812007ms 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.749810597Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.750365617Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.75411048Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.754952395Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=846.155µs 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.758870816Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.760457423Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.588586ms 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.766579863Z level=info msg="Executing migration" id="add 
last_applied column to alert_configuration_history" 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.774400915Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.822292ms 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.778595421Z level=info msg="Executing migration" id="create library_element table v1" 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.779811908Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.216287ms 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.78522966Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.787072519Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.842699ms 23:16:54 kafka | [2024-02-27 23:14:54,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.791195562Z level=info msg="Executing migration" id="create library_element_connection table v1" 23:16:54 kafka | [2024-02-27 23:14:54,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.792570446Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.375364ms 23:16:54 kafka | [2024-02-27 23:14:54,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.796645077Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 23:16:54 kafka | [2024-02-27 23:14:54,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:54 grafana | logger=migrator 
t=2024-02-27T23:14:16.798304376Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.66358ms 23:16:54 kafka | [2024-02-27 23:14:54,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.802435319Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 23:16:54 kafka | [2024-02-27 23:14:54,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.804428537Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.992448ms 23:16:54 kafka | [2024-02-27 23:14:54,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.80985378Z level=info msg="Executing migration" id="increase max description length to 2048" 23:16:54 kafka | [2024-02-27 23:14:54,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.810157947Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=303.806µs 23:16:54 kafka | [2024-02-27 23:14:54,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.814232946Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 23:16:54 kafka | [2024-02-27 23:14:54,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.814584725Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=351.139µs 23:16:54 kafka | [2024-02-27 23:14:54,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.818791573Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 23:16:54 kafka | [2024-02-27 23:14:54,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.81948769Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=687.137µs 23:16:54 kafka | [2024-02-27 23:14:54,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for 
partition __consumer_offsets-28 (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.824282389Z level=info msg="Executing migration" id="create data_keys table" 23:16:54 kafka | [2024-02-27 23:14:54,787] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.825426451Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.143911ms 23:16:54 kafka | [2024-02-27 23:14:54,787] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.829242366Z level=info msg="Executing migration" id="create secrets table" 23:16:54 kafka | [2024-02-27 23:14:54,831] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.830198999Z level=info msg="Migration successfully executed" id="create secrets table" duration=956.192µs 23:16:54 kafka | [2024-02-27 23:14:54,841] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.833895288Z level=info msg="Executing migration" id="rename data_keys name column to id" 23:16:54 kafka | [2024-02-27 23:14:54,843] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.880681515Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=46.780316ms 23:16:54 kafka | [2024-02-27 23:14:54,844] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.88708332Z level=info msg="Executing migration" id="add name column into data_keys" 23:16:54 kafka | [2024-02-27 23:14:54,845] INFO 
[Broker id=1] Leader __consumer_offsets-3 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.892915095Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.830534ms 23:16:54 kafka | [2024-02-27 23:14:54,860] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.898041362Z level=info msg="Executing migration" id="copy data_keys id column values into name" 23:16:54 kafka | [2024-02-27 23:14:54,861] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.898642634Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=602.932µs 23:16:54 kafka | [2024-02-27 23:14:54,861] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.902433839Z level=info msg="Executing migration" id="rename data_keys name column to label" 23:16:54 kafka | [2024-02-27 23:14:54,861] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.950565758Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=48.132399ms 23:16:54 kafka | [2024-02-27 23:14:54,861] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:16.955835113Z level=info msg="Executing migration" id="rename data_keys id column back to name" 23:16:54 kafka | [2024-02-27 23:14:54,870] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.005836309Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=49.998945ms 23:16:54 kafka | [2024-02-27 23:14:54,871] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.011071212Z level=info msg="Executing migration" id="create kv_store table v1" 23:16:54 kafka | [2024-02-27 23:14:54,871] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.011764519Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=691.257µs 23:16:54 kafka | [2024-02-27 23:14:54,871] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.016846045Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 23:16:54 kafka | [2024-02-27 23:14:54,871] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.018543054Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.69644ms 23:16:54 kafka | [2024-02-27 23:14:54,882] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.022518291Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 23:16:54 kafka | [2024-02-27 23:14:54,882] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.022955105Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=431.074µs 23:16:54 kafka | [2024-02-27 23:14:54,883] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.027974427Z level=info msg="Executing migration" id="create permission table" 23:16:54 kafka | [2024-02-27 23:14:54,883] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.028919267Z level=info msg="Migration successfully executed" id="create permission table" duration=944.34µs 23:16:54 kafka | [2024-02-27 23:14:54,883] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.035922243Z level=info msg="Executing migration" id="add unique index permission.role_id" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.037617462Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.695219ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.041483634Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.04254722Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.063396ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.046551399Z level=info msg="Executing migration" id="create role table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.047401174Z level=info msg="Migration successfully executed" id="create role table" duration=848.364µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.052514271Z level=info msg="Executing migration" id="add column display_name" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.059687267Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.172586ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.063520127Z level=info msg="Executing migration" id="add column group_name" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.07081708Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.296662ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.074382266Z level=info msg="Executing migration" id="add index role.org_id" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.075176027Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=796.211µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.079451731Z level=info msg="Executing migration" id="add unique index role_org_id_name" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.080310826Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=858.725µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.084005369Z level=info msg="Executing migration" id="add index role_org_id_uid" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.085692698Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.686709ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.089581461Z level=info msg="Executing migration" id="create team role table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.090349932Z level=info msg="Migration successfully executed" id="create team role table" duration=768.001µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.094993314Z level=info msg="Executing migration" id="add index team_role.org_id" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.096185927Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.189393ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.100538404Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.101755899Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.217074ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.106978602Z level=info msg="Executing migration" id="add index 
team_role.team_id" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.108725443Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.751121ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.112656759Z level=info msg="Executing migration" id="create user role table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.113989658Z level=info msg="Migration successfully executed" id="create user role table" duration=1.331929ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.117775166Z level=info msg="Executing migration" id="add index user_role.org_id" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.118871224Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.095998ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.122414609Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.123467865Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.054426ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.12797203Z level=info msg="Executing migration" id="add index user_role.user_id" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.129072038Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.099788ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.13447757Z level=info msg="Executing migration" id="create builtin role table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.135840092Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.361942ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.139679323Z level=info msg="Executing migration" id="add index builtin_role.role_id" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.141090777Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.407114ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.145902909Z level=info msg="Executing migration" id="add index builtin_role.name" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.14707472Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.171571ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.150463757Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.158613014Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.148846ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.163461848Z level=info msg="Executing migration" id="add index builtin_role.org_id" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.164566915Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.104148ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.172855659Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.174556728Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.700699ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.181163864Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 23:16:54 grafana | logger=migrator 
t=2024-02-27T23:14:17.182283283Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.116269ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.185773615Z level=info msg="Executing migration" id="add unique index role.uid" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.186781818Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.007912ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.192084135Z level=info msg="Executing migration" id="create seed assignment table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.193615126Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.53438ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.197381542Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.200684205Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=3.303733ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.204252062Z level=info msg="Executing migration" id="add column hidden to role table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.214843816Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=10.579084ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.220407207Z level=info msg="Executing migration" id="permission kind migration" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.228650429Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.238711ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.232260197Z level=info msg="Executing migration" id="permission attribute migration" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.243246652Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=10.993605ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.251883664Z level=info msg="Executing migration" id="permission identifier migration" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.260376648Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.491614ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.263758146Z level=info msg="Executing migration" id="add permission identifier index" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.265035623Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.277836ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.27052687Z level=info msg="Executing migration" id="create query_history table v1" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.271548233Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.021163ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.274773793Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.276167265Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.392602ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.283609615Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 23:16:54 grafana | logger=migrator 
t=2024-02-27T23:14:17.283847597Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=237.842µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.293292351Z level=info msg="Executing migration" id="rbac disabled migrator" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.293415658Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=123.126µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.296047705Z level=info msg="Executing migration" id="teams permissions migration" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.296679128Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=652.204µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.301148972Z level=info msg="Executing migration" id="dashboard permissions" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.303092164Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.942262ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.308774872Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.317619064Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=8.842872ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.325700197Z level=info msg="Executing migration" id="drop managed folder create actions" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.326621305Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=926.728µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.335085908Z level=info msg="Executing migration" id="alerting notification permissions" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.33570118Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=619.542µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.340324063Z level=info msg="Executing migration" id="create query_history_star table v1" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.341216219Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=892.647µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.345991569Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.347775003Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.782674ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.352594664Z level=info msg="Executing migration" id="add column org_id in query_history_star" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.360524939Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=7.929405ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.364449035Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.364523319Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=75.564µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.369740011Z level=info msg="Executing migration" id="create correlation table v1" 23:16:54 grafana | 
logger=migrator t=2024-02-27T23:14:17.37065064Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=910.669µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.373991444Z level=info msg="Executing migration" id="add index correlations.uid" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.375134984Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.1436ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.380781729Z level=info msg="Executing migration" id="add index correlations.source_uid" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.382712771Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.931461ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.422927615Z level=info msg="Executing migration" id="add correlation config column" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.433804144Z level=info msg="Migration successfully executed" id="add correlation config column" duration=10.872629ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.439932154Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.441109587Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.181733ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.445171769Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.447116021Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.952693ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.45416193Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.482656601Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=28.48204ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.487353346Z level=info msg="Executing migration" id="create correlation v2" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.488237393Z level=info msg="Migration successfully executed" id="create correlation v2" duration=884.977µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.492799601Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.494054377Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.255196ms 23:16:54 kafka | [2024-02-27 23:14:54,894] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:54,894] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:54,894] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,894] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high 
watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,894] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:54,905] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:54,905] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:54,906] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,906] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,906] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:54,912] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:54,913] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:54,913] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,913] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,913] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:54,921] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:54,922] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:54,922] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,922] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,922] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:54,931] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:54,932] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:54,932] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,932] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,932] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:54,939] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:54,940] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:54,940] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,940] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,940] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:54,948] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:54,948] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:54,948] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,948] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,949] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:54,956] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:54,957] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:54,957] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,957] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,957] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:54,964] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:54,964] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:54,965] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,965] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,965] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:54,974] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:54,974] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:54,974] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,974] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,974] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:54,980] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:54,980] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:54,980] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,980] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,981] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:54,991] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:54,992] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:54,993] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,994] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:54,994] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,002] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,003] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,003] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,003] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,003] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,016] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,017] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,017] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,017] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.497597862Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.498840918Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.243066ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.507766605Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.50979164Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=2.025955ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.514820344Z level=info msg="Executing migration" id="copy correlation v1 to v2" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.515233095Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=412.312µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.518681396Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.519500459Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=818.803µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.524437907Z level=info msg="Executing migration" id="add provisioning column" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.533903122Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.468945ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.538218509Z level=info msg="Executing migration" id="create entity_events table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.538802599Z level=info msg="Migration successfully executed" id="create entity_events table" duration=584.18µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.542246419Z level=info msg="Executing migration" id="create dashboard public config v1" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.543178117Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=931.928µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.549058345Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.549537221Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.554337451Z level=info msg="Executing migration" 
id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.555026828Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.558617186Z level=info msg="Executing migration" id="Drop old dashboard public config table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.559873841Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.256425ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.563759995Z level=info msg="Executing migration" id="recreate dashboard public config v1" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.564954457Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.194172ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.571372953Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.572472711Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.099638ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.575465897Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.576603757Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.137659ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.580115531Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.581320024Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.203813ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.587694267Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.588741782Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.045375ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.591700507Z level=info msg="Executing migration" id="Drop public config table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.592493328Z level=info msg="Migration successfully executed" id="Drop public config table" duration=792.801µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.599766079Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.60074102Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=974.481µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.611718985Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.614400955Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=2.681491ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.620702905Z level=info msg="Executing migration" id="create index 
IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.622750562Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.047338ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.628845341Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.629958439Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.112517ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.634939949Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.669271846Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=34.331587ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.673766641Z level=info msg="Executing migration" id="add annotations_enabled column" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.679953335Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.186484ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.687056857Z level=info msg="Executing migration" id="add time_selection_enabled column" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.694234552Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=7.176015ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.700624387Z level=info msg="Executing migration" id="delete orphaned public dashboards" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.700802927Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=178.439µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.705176865Z level=info msg="Executing migration" id="add share column" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.711047092Z level=info msg="Migration successfully executed" id="add share column" duration=5.869977ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.718228788Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.718403687Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=173.429µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.721756733Z level=info msg="Executing migration" id="create file table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.722366524Z level=info msg="Migration successfully executed" id="create file table" duration=609.402µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.727520144Z level=info msg="Executing migration" id="file table idx: path natural pk" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.728331866Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=811.502µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.732650413Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.733416013Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash 
fast folder retrieval" duration=765.409µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.739423057Z level=info msg="Executing migration" id="create file_meta table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.74062523Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.202083ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.748477431Z level=info msg="Executing migration" id="file table idx: path key" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.750447614Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.969392ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.757417739Z level=info msg="Executing migration" id="set path collation in file table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.757499483Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=81.824µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.763840615Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.76394052Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=101.045µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.77138389Z level=info msg="Executing migration" id="managed permissions migration" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.772321119Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=937.798µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.803896641Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.804273341Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=376.741µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.810703857Z level=info msg="Executing migration" id="RBAC action name migrator" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.812044557Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.34078ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.817378186Z level=info msg="Executing migration" id="Add UID column to playlist" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.825926133Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.547117ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.830855932Z level=info msg="Executing migration" id="Update uid column values in playlist" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.831044121Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=186.669µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.861363178Z level=info msg="Executing migration" id="Add index for uid in playlist" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.86349127Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.126872ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.86999404Z level=info msg="Executing migration" id="update group index for alert rules" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.87057331Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=584.921µs 23:16:54 grafana | 
logger=migrator t=2024-02-27T23:14:17.91068421Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.911131763Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=445.894µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.916873873Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.917656154Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=782.321µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.924234318Z level=info msg="Executing migration" id="add action column to seed_assignment" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.93555016Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.316912ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.941907333Z level=info msg="Executing migration" id="add scope column to seed_assignment" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.950512194Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.60426ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.954672371Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.955829462Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.156901ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:17.960522467Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.072143739Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=111.625103ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.07830824Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.079451682Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.143442ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.083082926Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.084302851Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.217915ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.088427473Z level=info msg="Executing migration" id="add primary key to seed_assigment" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.122648958Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=34.221165ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.161200387Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.161662291Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=463.675µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.169324632Z level=info msg="Executing migration" 
id="managed folder permissions library panel actions migration" 23:16:54 kafka | [2024-02-27 23:14:55,017] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,025] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,026] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,026] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,026] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,026] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,034] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,034] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,034] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,034] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,035] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,050] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,051] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,052] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,052] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,052] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,064] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,064] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,064] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,065] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,065] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,073] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,074] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,074] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,074] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,074] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,083] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,083] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,083] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,083] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,083] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,093] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,094] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,094] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,094] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.169705873Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=381.741µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.174635757Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.175013638Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=377.75µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.185523482Z level=info msg="Executing migration" id="create folder table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.186421199Z level=info msg="Migration successfully executed" id="create folder table" duration=897.318µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.192853304Z level=info msg="Executing migration" id="Add index for parent_uid" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.194349155Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.495631ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.20301491Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.205225218Z level=info msg="Migration successfully executed" id="Add unique 
index for folder.uid and folder.org_id" duration=2.209979ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.211963669Z level=info msg="Executing migration" id="Update folder title length" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.212018912Z level=info msg="Migration successfully executed" id="Update folder title length" duration=59.693µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.219811491Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.221291479Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.481609ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.22632353Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.227501403Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.177434ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.234704659Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.235870852Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.165983ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.241274952Z level=info msg="Executing migration" id="Sync dashboard and folder table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.241744257Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=469.125µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.249503573Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.24981089Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=307.967µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.258271574Z level=info msg="Executing migration" id="create anon_device table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.259167722Z level=info msg="Migration successfully executed" id="create anon_device table" duration=896.257µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.317473819Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.319420413Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.946504ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.336034124Z level=info msg="Executing migration" id="add index anon_device.updated_at" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.338673806Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.638832ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.350093318Z level=info msg="Executing migration" id="create signing_key table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.350923974Z level=info msg="Migration successfully executed" id="create signing_key table" duration=836.406µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.355479907Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 23:16:54 grafana | logger=migrator 
t=2024-02-27T23:14:18.357425662Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.945145ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.365502456Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.366804925Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.3024ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.374554241Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.374989394Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=436.563µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.383930563Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.396042224Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=12.112441ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.406667284Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.407982594Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.321801ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.414044889Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.415425964Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.377764ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.42132285Z level=info msg="Executing migration" id="create sso_setting table" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.422426199Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.103499ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.431877516Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.433083871Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.209184ms 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.440476927Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.44108502Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=610.023µs 23:16:54 grafana | logger=migrator t=2024-02-27T23:14:18.449632198Z level=info msg="migrations completed" performed=526 skipped=0 duration=4.352225065s 23:16:54 grafana | logger=sqlstore t=2024-02-27T23:14:18.461639532Z level=info msg="Created default admin" user=admin 23:16:54 grafana | logger=sqlstore t=2024-02-27T23:14:18.462110687Z level=info msg="Created default organization" 23:16:54 grafana | logger=secrets t=2024-02-27T23:14:18.471234857Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 23:16:54 grafana | logger=plugin.store t=2024-02-27T23:14:18.488348105Z level=info msg="Loading plugins..." 
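The Grafana migrator entries above are logfmt records with msg, id and duration fields, so they can be post-processed directly from a captured console log. As a minimal sketch (not part of the CSIT job): assuming this output has been saved to a file, here called console.log as an illustrative placeholder, the snippet below extracts the completed migrations and ranks the slowest ones, e.g. the 111.6 ms "update seed_assignment role_name column to nullable" step seen further down.

    # Illustrative helper only; parses migrator lines like those shown above.
    import re

    # Matches e.g.: msg="Migration successfully executed" id="create correlation v2" duration=884.977µs
    PATTERN = re.compile(
        r'msg="Migration successfully executed" id="(?P<id>[^"]+)" duration=(?P<value>[\d.]+)(?P<unit>µs|ms|s)'
    )
    UNIT_TO_MS = {"µs": 1e-3, "ms": 1.0, "s": 1e3}

    def slowest_migrations(path, top=10):
        """Return the `top` slowest migrations as (duration_ms, migration_id) tuples."""
        results = []
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                m = PATTERN.search(line)
                if m:
                    ms = float(m.group("value")) * UNIT_TO_MS[m.group("unit")]
                    results.append((ms, m.group("id")))
        return sorted(results, reverse=True)[:top]

    if __name__ == "__main__":
        # "console.log" is a hypothetical capture of the job output above.
        for ms, mig_id in slowest_migrations("console.log"):
            print(f"{ms:10.3f} ms  {mig_id}")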
23:16:54 kafka | [2024-02-27 23:14:55,094] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,107] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,108] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,108] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,108] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,108] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,121] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,122] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,122] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,122] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,122] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,133] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,134] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,134] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,134] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,134] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,141] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,142] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,142] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,142] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,142] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,149] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,150] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,150] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,150] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,150] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,175] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,176] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,176] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,176] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,176] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,185] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,185] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,185] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 23:16:54 grafana | logger=local.finder t=2024-02-27T23:14:18.541544939Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 23:16:54 grafana | logger=plugin.store t=2024-02-27T23:14:18.541615742Z level=info msg="Plugins loaded" count=55 duration=53.268167ms 23:16:54 grafana | logger=query_data t=2024-02-27T23:14:18.544021972Z level=info msg="Query Service initialization" 23:16:54 grafana | logger=live.push_http t=2024-02-27T23:14:18.547616054Z level=info msg="Live Push Gateway initialization" 23:16:54 grafana | logger=ngalert.migration t=2024-02-27T23:14:18.553561233Z level=info msg=Starting 23:16:54 grafana | logger=ngalert.migration orgID=1 t=2024-02-27T23:14:18.554405448Z level=info msg="Migrating alerts for organisation" 23:16:54 grafana | logger=ngalert.migration orgID=1 t=2024-02-27T23:14:18.555136808Z level=info msg="Alerts found to migrate" alerts=0 23:16:54 grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-02-27T23:14:18.556996957Z level=info msg="Completed legacy migration" 23:16:54 grafana | logger=infra.usagestats.collector t=2024-02-27T23:14:18.595737665Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 23:16:54 grafana | logger=provisioning.datasources t=2024-02-27T23:14:18.598154725Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 23:16:54 grafana | logger=provisioning.alerting t=2024-02-27T23:14:18.61223172Z level=info msg="starting to provision alerting" 23:16:54 
grafana | logger=provisioning.alerting t=2024-02-27T23:14:18.612248921Z level=info msg="finished to provision alerting" 23:16:54 grafana | logger=grafanaStorageLogger t=2024-02-27T23:14:18.612741507Z level=info msg="Storage starting" 23:16:54 grafana | logger=http.server t=2024-02-27T23:14:18.616272656Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 23:16:54 grafana | logger=ngalert.state.manager t=2024-02-27T23:14:18.61634414Z level=info msg="Warming state cache for startup" 23:16:54 grafana | logger=ngalert.multiorg.alertmanager t=2024-02-27T23:14:18.617898184Z level=info msg="Starting MultiOrg Alertmanager" 23:16:54 grafana | logger=grafana-apiserver t=2024-02-27T23:14:18.62340461Z level=info msg="Authentication is disabled" 23:16:54 grafana | logger=grafana-apiserver t=2024-02-27T23:14:18.630181633Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 23:16:54 grafana | logger=plugins.update.checker t=2024-02-27T23:14:18.710044707Z level=info msg="Update check succeeded" duration=96.277505ms 23:16:54 grafana | logger=sqlstore.transactions t=2024-02-27T23:14:18.715821087Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 23:16:54 grafana | logger=sqlstore.transactions t=2024-02-27T23:14:18.727702664Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 23:16:54 grafana | logger=ngalert.state.manager t=2024-02-27T23:14:18.738120123Z level=info msg="State cache has been initialized" states=0 duration=121.774553ms 23:16:54 grafana | logger=ngalert.scheduler t=2024-02-27T23:14:18.738215478Z level=info msg="Starting scheduler" tickInterval=10s 23:16:54 grafana | logger=ticker t=2024-02-27T23:14:18.738284392Z level=info msg=starting first_tick=2024-02-27T23:14:20Z 23:16:54 grafana | logger=sqlstore.transactions t=2024-02-27T23:14:18.741201298Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 23:16:54 grafana | logger=grafana.update.checker t=2024-02-27T23:14:18.758898877Z level=info msg="Update check succeeded" duration=146.458116ms 23:16:54 grafana | logger=sqlstore.transactions t=2024-02-27T23:14:18.811545441Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 23:16:54 grafana | logger=infra.usagestats t=2024-02-27T23:16:07.624272908Z level=info msg="Usage stats are ready to report" 23:16:54 kafka | [2024-02-27 23:14:55,185] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,185] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,194] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,194] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,194] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,195] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,195] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,201] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,201] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,201] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,201] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,202] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,207] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,207] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,207] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,208] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,208] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(Z3t4fWueQ-mCuVhNX6-71A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,217] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,218] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,218] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,218] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,218] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,230] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,231] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,231] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,231] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,232] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,243] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,244] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,244] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,244] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,245] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,251] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,251] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,251] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,252] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,252] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,259] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,260] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,260] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,260] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,260] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,268] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,269] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,269] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,270] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,270] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,276] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,277] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,277] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,277] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,277] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,284] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,285] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,286] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,286] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,286] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,293] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,294] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,294] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,294] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,294] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,301] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,302] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,302] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,302] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,302] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,311] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,312] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,312] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,312] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,312] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,319] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,319] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,320] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,320] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,320] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,329] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,329] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,329] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,329] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,329] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,338] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,338] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,338] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,338] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,339] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,346] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,347] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,347] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,347] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,347] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
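Each "Created log for partition __consumer_offsets-N" entry above reports the same topic-level settings: cleanup.policy=compact, compression.type=producer and segment.bytes=104857600. A hedged sketch that reads those settings back through kafka-python's admin client (the broker address is an assumed port mapping; the raw DescribeConfigs response is printed rather than parsed):

    # Sketch only; localhost:9092 is an assumed mapping to the kafka container.
    from kafka.admin import KafkaAdminClient, ConfigResource, ConfigResourceType

    admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
    resource = ConfigResource(ConfigResourceType.TOPIC, "__consumer_offsets")
    # The response should echo cleanup.policy=compact and segment.bytes=104857600
    # as seen in the broker log above.
    for response in admin.describe_configs([resource]):
        print(response)
    admin.close()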
(state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,358] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:54 kafka | [2024-02-27 23:14:55,359] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:54 kafka | [2024-02-27 23:14:55,359] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,360] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 23:16:54 kafka | [2024-02-27 23:14:55,360] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(RvhKTaXTQ2ueiwbitViXeA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] 
Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,365] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 
(state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 
for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,366] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,376] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,380] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,383] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,383] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,383] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,383] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,383] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,383] INFO [GroupMetadataManager 
brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,383] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,383] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,383] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,383] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,383] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,383] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,383] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of 
offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,384] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,385] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,385] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,385] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,385] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,385] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,385] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,385] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,385] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,385] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,385] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,385] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,385] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,385] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,385] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,388] INFO [Broker id=1] Finished LeaderAndIsr request in 639ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,390] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 9 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,391] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,391] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,391] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,391] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,392] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,392] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,392] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,392] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,393] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
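The "Elected as the group coordinator for partition N" entries above reflect how Kafka assigns a consumer group to one of the 50 __consumer_offsets partitions: as I understand the mapping, the partition is (groupId.hashCode() & 0x7fffffff) % 50, and the broker leading that partition acts as the group's coordinator. A small worked example reproducing that calculation in Python (the group id shown is purely illustrative):

    # Sketch reproducing Kafka's group-to-__consumer_offsets-partition mapping,
    # assuming the usual rule abs(hashCode) % numPartitions with numPartitions=50.
    def java_string_hash(s: str) -> int:
        """Java String.hashCode(), reduced to a signed 32-bit integer."""
        h = 0
        for ch in s:
            h = (31 * h + ord(ch)) & 0xFFFFFFFF
        return h - 0x100000000 if h >= 0x80000000 else h

    def offsets_partition_for(group_id: str, num_partitions: int = 50) -> int:
        return (java_string_hash(group_id) & 0x7FFFFFFF) % num_partitions

    print(offsets_partition_for("policy-pap"))  # "policy-pap" is a hypothetical group id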
(kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,393] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,393] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,393] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,394] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,394] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,394] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,394] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,395] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,395] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,395] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,395] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,396] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,396] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,396] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,396] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,396] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,396] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,397] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,397] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,397] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,397] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,397] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,397] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,397] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,398] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,398] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,398] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,398] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,398] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,398] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,398] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=RvhKTaXTQ2ueiwbitViXeA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=Z3t4fWueQ-mCuVhNX6-71A, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,399] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,399] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,399] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,399] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,399] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,400] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,400] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,400] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,400] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,400] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:54 kafka | [2024-02-27 23:14:55,403] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,403] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,403] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,403] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,403] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,403] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,403] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,403] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,403] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 
23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,404] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,405] TRACE [Broker id=1] 
Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,405] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,405] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,405] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,405] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,405] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,405] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,405] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,405] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,405] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,405] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,405] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,406] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,407] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:54 kafka | [2024-02-27 23:14:55,474] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group fd3c6b2f-e961-4dee-b92a-5df6cab870fa in Empty state. Created a new member id consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3-e4378968-77ad-4b63-8cfc-d4149cd89a93 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,486] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-e7b834ec-6ce7-4813-ad09-40e41dbc774c and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,494] INFO [GroupCoordinator 1]: Preparing to rebalance group fd3c6b2f-e961-4dee-b92a-5df6cab870fa in state PreparingRebalance with old generation 0 (__consumer_offsets-30) (reason: Adding new member consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3-e4378968-77ad-4b63-8cfc-d4149cd89a93 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:55,496] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-e7b834ec-6ce7-4813-ad09-40e41dbc774c with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:56,077] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 2e9a8db0-5ced-4fac-ad85-e31c5601b919 in Empty state. 
Created a new member id consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2-b8f1b9fb-2b36-44b6-b54b-f1ddb8d6f785 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:56,081] INFO [GroupCoordinator 1]: Preparing to rebalance group 2e9a8db0-5ced-4fac-ad85-e31c5601b919 in state PreparingRebalance with old generation 0 (__consumer_offsets-9) (reason: Adding new member consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2-b8f1b9fb-2b36-44b6-b54b-f1ddb8d6f785 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:58,507] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:58,511] INFO [GroupCoordinator 1]: Stabilized group fd3c6b2f-e961-4dee-b92a-5df6cab870fa generation 1 (__consumer_offsets-30) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:58,530] INFO [GroupCoordinator 1]: Assignment received from leader consumer-fd3c6b2f-e961-4dee-b92a-5df6cab870fa-3-e4378968-77ad-4b63-8cfc-d4149cd89a93 for group fd3c6b2f-e961-4dee-b92a-5df6cab870fa for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:58,530] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-e7b834ec-6ce7-4813-ad09-40e41dbc774c for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:59,081] INFO [GroupCoordinator 1]: Stabilized group 2e9a8db0-5ced-4fac-ad85-e31c5601b919 generation 1 (__consumer_offsets-9) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:54 kafka | [2024-02-27 23:14:59,097] INFO [GroupCoordinator 1]: Assignment received from leader consumer-2e9a8db0-5ced-4fac-ad85-e31c5601b919-2-b8f1b9fb-2b36-44b6-b54b-f1ddb8d6f785 for group 2e9a8db0-5ced-4fac-ad85-e31c5601b919 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:54 ++ echo 'Tearing down containers...' 23:16:54 Tearing down containers... 23:16:54 ++ docker-compose down -v --remove-orphans 23:16:54 Stopping policy-apex-pdp ... 23:16:54 Stopping policy-pap ... 23:16:54 Stopping policy-api ... 23:16:54 Stopping kafka ... 23:16:54 Stopping grafana ... 23:16:54 Stopping compose_zookeeper_1 ... 23:16:54 Stopping simulator ... 23:16:54 Stopping prometheus ... 23:16:54 Stopping mariadb ... 23:16:55 Stopping grafana ... done 23:16:55 Stopping prometheus ... done 23:17:05 Stopping policy-apex-pdp ... done 23:17:15 Stopping simulator ... done 23:17:15 Stopping policy-pap ... done 23:17:16 Stopping mariadb ... done 23:17:16 Stopping kafka ... done 23:17:17 Stopping compose_zookeeper_1 ... done 23:17:26 Stopping policy-api ... done 23:17:26 Removing policy-apex-pdp ... 23:17:26 Removing policy-pap ... 23:17:26 Removing policy-api ... 23:17:26 Removing policy-db-migrator ... 23:17:26 Removing kafka ... 23:17:26 Removing grafana ... 23:17:26 Removing compose_zookeeper_1 ... 23:17:26 Removing simulator ... 23:17:26 Removing prometheus ... 23:17:26 Removing mariadb ... 23:17:26 Removing policy-apex-pdp ... done 23:17:26 Removing grafana ... done 23:17:26 Removing policy-api ... done 23:17:26 Removing policy-db-migrator ... 
done 23:17:26 Removing kafka ... done 23:17:26 Removing policy-pap ... done 23:17:26 Removing simulator ... done 23:17:26 Removing mariadb ... done 23:17:26 Removing prometheus ... done 23:17:26 Removing compose_zookeeper_1 ... done 23:17:26 Removing network compose_default 23:17:26 ++ cd /w/workspace/policy-pap-master-project-csit-pap 23:17:26 + load_set 23:17:26 + _setopts=hxB 23:17:26 ++ tr : ' ' 23:17:26 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:17:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:26 + set +o braceexpand 23:17:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:26 + set +o hashall 23:17:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:26 + set +o interactive-comments 23:17:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:26 + set +o xtrace 23:17:26 ++ echo hxB 23:17:26 ++ sed 's/./& /g' 23:17:26 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:26 + set +h 23:17:26 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:26 + set +x 23:17:26 + [[ -n /tmp/tmp.KIs5KFRoI8 ]] 23:17:26 + rsync -av /tmp/tmp.KIs5KFRoI8/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:17:26 sending incremental file list 23:17:26 ./ 23:17:26 log.html 23:17:26 output.xml 23:17:26 report.html 23:17:26 testplan.txt 23:17:26 23:17:26 sent 918,747 bytes received 95 bytes 1,837,684.00 bytes/sec 23:17:26 total size is 918,201 speedup is 1.00 23:17:26 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:17:26 + exit 0 23:17:26 $ ssh-agent -k 23:17:26 unset SSH_AUTH_SOCK; 23:17:26 unset SSH_AGENT_PID; 23:17:26 echo Agent pid 2156 killed; 23:17:26 [ssh-agent] Stopped. 23:17:26 Robot results publisher started... 23:17:26 INFO: Checking test criticality is deprecated and will be dropped in a future release! 23:17:26 -Parsing output xml: 23:17:27 Done! 23:17:27 WARNING! Could not find file: **/log.html 23:17:27 WARNING! Could not find file: **/report.html 23:17:27 -Copying log files to build dir: 23:17:27 Done! 23:17:27 -Assigning results to build: 23:17:27 Done! 23:17:27 -Checking thresholds: 23:17:27 Done! 23:17:27 Done publishing Robot results. 23:17:27 [PostBuildScript] - [INFO] Executing post build scripts. 
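The teardown and archiving commands traced above (docker-compose down, rsync of the Robot artifacts, workspace cleanup) can be summarised in a small shell sequence. The sketch below is a hedged reconstruction of those traced commands, not the actual CSIT script: the ROBOT_TMP variable is a stand-in for the mktemp directory (/tmp/tmp.KIs5KFRoI8 in this run), and the guard around the rsync is an assumption added to make the sketch self-contained.

```bash
#!/bin/bash
# Hedged sketch of the teardown/archive commands seen in the trace above.
# ROBOT_TMP is a stand-in for the mktemp directory used by the real run.
set -euo pipefail

WORKSPACE=${WORKSPACE:-/w/workspace/policy-pap-master-project-csit-pap}
ROBOT_TMP=${ROBOT_TMP:-/tmp/robot-output}   # e.g. /tmp/tmp.KIs5KFRoI8 in this run

echo 'Tearing down containers...'
# -v also removes the compose volumes; --remove-orphans cleans up any
# containers left over that are not part of the current compose file.
docker-compose down -v --remove-orphans

# Copy the Robot Framework artifacts (log.html, output.xml, report.html,
# testplan.txt) into the workspace so Jenkins can archive and publish them.
if [[ -d "${ROBOT_TMP}" ]]; then
  rsync -av "${ROBOT_TMP}/" "${WORKSPACE}/csit/archives/pap"
fi

# Remove the checked-out models repository before the job exits.
rm -rf "${WORKSPACE}/models"
```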
23:17:27 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2522793941024119095.sh 23:17:27 ---> sysstat.sh 23:17:27 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16998859592034449817.sh 23:17:27 ---> package-listing.sh 23:17:27 ++ facter osfamily 23:17:27 ++ tr '[:upper:]' '[:lower:]' 23:17:28 + OS_FAMILY=debian 23:17:28 + workspace=/w/workspace/policy-pap-master-project-csit-pap 23:17:28 + START_PACKAGES=/tmp/packages_start.txt 23:17:28 + END_PACKAGES=/tmp/packages_end.txt 23:17:28 + DIFF_PACKAGES=/tmp/packages_diff.txt 23:17:28 + PACKAGES=/tmp/packages_start.txt 23:17:28 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:28 + PACKAGES=/tmp/packages_end.txt 23:17:28 + case "${OS_FAMILY}" in 23:17:28 + dpkg -l 23:17:28 + grep '^ii' 23:17:28 + '[' -f /tmp/packages_start.txt ']' 23:17:28 + '[' -f /tmp/packages_end.txt ']' 23:17:28 + diff /tmp/packages_start.txt /tmp/packages_end.txt 23:17:28 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:28 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:28 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:28 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12948119861937831234.sh 23:17:28 ---> capture-instance-metadata.sh 23:17:28 Setup pyenv: 23:17:28 system 23:17:28 3.8.13 23:17:28 3.9.13 23:17:28 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:28 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-biv1 from file:/tmp/.os_lf_venv 23:17:29 lf-activate-venv(): INFO: Installing: lftools 23:17:40 lf-activate-venv(): INFO: Adding /tmp/venv-biv1/bin to PATH 23:17:40 INFO: Running in OpenStack, capturing instance metadata 23:17:41 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9675983173499823143.sh 23:17:41 provisioning config files... 23:17:41 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config3503769040805447867tmp 23:17:41 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 23:17:41 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 23:17:41 [EnvInject] - Injecting environment variables from a build step. 23:17:41 [EnvInject] - Injecting as environment variables the properties content 23:17:41 SERVER_ID=logs 23:17:41 23:17:41 [EnvInject] - Variables injected successfully. 23:17:41 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins616632600480470124.sh 23:17:41 ---> create-netrc.sh 23:17:41 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7409328398472964907.sh 23:17:41 ---> python-tools-install.sh 23:17:41 Setup pyenv: 23:17:41 system 23:17:41 3.8.13 23:17:41 3.9.13 23:17:41 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:41 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-biv1 from file:/tmp/.os_lf_venv 23:17:42 lf-activate-venv(): INFO: Installing: lftools 23:17:50 lf-activate-venv(): INFO: Adding /tmp/venv-biv1/bin to PATH 23:17:50 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15823035382679968010.sh 23:17:50 ---> sudo-logs.sh 23:17:50 Archiving 'sudo' log.. 
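The package-listing step above runs with xtrace enabled, so its logic can be read back from the "+" lines: detect the OS family, snapshot the installed packages, diff the start and end snapshots, and archive the results. Below is a hedged reconstruction of that flow for the Debian branch only; the output redirections and the || true guards are assumptions added so the sketch runs cleanly on its own, not details taken from the real script.

```bash
#!/bin/bash
# Hedged reconstruction of the package-listing step as traced above (Debian branch only).
set -euo pipefail

workspace=${WORKSPACE:-/w/workspace/policy-pap-master-project-csit-pap}
START_PACKAGES=/tmp/packages_start.txt
END_PACKAGES=/tmp/packages_end.txt
DIFF_PACKAGES=/tmp/packages_diff.txt

# When a workspace is set, the job is taking its end-of-build snapshot,
# so the listing goes to packages_end.txt instead of packages_start.txt.
PACKAGES=$START_PACKAGES
if [ -n "$workspace" ]; then
  PACKAGES=$END_PACKAGES
fi

OS_FAMILY=$(facter osfamily | tr '[:upper:]' '[:lower:]')
case "$OS_FAMILY" in
  debian)
    # Record only installed packages ("ii" status lines from dpkg).
    dpkg -l | grep '^ii' > "$PACKAGES"
    ;;
esac

# Diff the two snapshots when both exist; diff exits 1 when they differ,
# hence the || true so the post-build step does not fail the job.
if [ -f "$START_PACKAGES" ] && [ -f "$END_PACKAGES" ]; then
  diff "$START_PACKAGES" "$END_PACKAGES" > "$DIFF_PACKAGES" || true
fi

# Archive the snapshots and the diff alongside the other build artifacts.
mkdir -p "$workspace/archives/"
cp -f "$DIFF_PACKAGES" "$END_PACKAGES" "$START_PACKAGES" "$workspace/archives/" 2>/dev/null || true
```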
23:17:50 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11474355243506320724.sh 23:17:50 ---> job-cost.sh 23:17:50 Setup pyenv: 23:17:50 system 23:17:50 3.8.13 23:17:50 3.9.13 23:17:51 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:51 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-biv1 from file:/tmp/.os_lf_venv 23:17:52 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 23:17:57 lf-activate-venv(): INFO: Adding /tmp/venv-biv1/bin to PATH 23:17:57 INFO: No Stack... 23:17:58 INFO: Retrieving Pricing Info for: v3-standard-8 23:17:58 INFO: Archiving Costs 23:17:58 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins12844108951178974710.sh 23:17:58 ---> logs-deploy.sh 23:17:58 Setup pyenv: 23:17:58 system 23:17:58 3.8.13 23:17:58 3.9.13 23:17:58 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:58 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-biv1 from file:/tmp/.os_lf_venv 23:18:00 lf-activate-venv(): INFO: Installing: lftools 23:18:07 lf-activate-venv(): INFO: Adding /tmp/venv-biv1/bin to PATH 23:18:07 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1593 23:18:07 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 23:18:09 Archives upload complete. 23:18:09 INFO: archiving logs to Nexus 23:18:09 ---> uname -a: 23:18:09 Linux prd-ubuntu1804-docker-8c-8g-9276 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux 23:18:09 23:18:09 23:18:09 ---> lscpu: 23:18:09 Architecture: x86_64 23:18:09 CPU op-mode(s): 32-bit, 64-bit 23:18:09 Byte Order: Little Endian 23:18:09 CPU(s): 8 23:18:09 On-line CPU(s) list: 0-7 23:18:09 Thread(s) per core: 1 23:18:09 Core(s) per socket: 1 23:18:09 Socket(s): 8 23:18:09 NUMA node(s): 1 23:18:09 Vendor ID: AuthenticAMD 23:18:09 CPU family: 23 23:18:09 Model: 49 23:18:09 Model name: AMD EPYC-Rome Processor 23:18:09 Stepping: 0 23:18:09 CPU MHz: 2799.998 23:18:09 BogoMIPS: 5599.99 23:18:09 Virtualization: AMD-V 23:18:09 Hypervisor vendor: KVM 23:18:09 Virtualization type: full 23:18:09 L1d cache: 32K 23:18:09 L1i cache: 32K 23:18:09 L2 cache: 512K 23:18:09 L3 cache: 16384K 23:18:09 NUMA node0 CPU(s): 0-7 23:18:09 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities 23:18:09 23:18:09 23:18:09 ---> nproc: 23:18:09 8 23:18:09 23:18:09 23:18:09 ---> df -h: 23:18:09 Filesystem Size Used Avail Use% Mounted on 23:18:09 udev 16G 0 16G 0% /dev 23:18:09 tmpfs 3.2G 708K 3.2G 1% /run 23:18:09 /dev/vda1 155G 14G 142G 9% / 23:18:09 tmpfs 16G 0 16G 0% /dev/shm 23:18:09 tmpfs 5.0M 0 5.0M 0% /run/lock 23:18:09 tmpfs 16G 0 16G 0% /sys/fs/cgroup 23:18:09 /dev/vda15 105M 4.4M 100M 5% /boot/efi 23:18:09 tmpfs 3.2G 0 3.2G 0% /run/user/1001 23:18:09 23:18:09 23:18:09 ---> free -m: 23:18:09 total used free shared buff/cache available 23:18:09 Mem: 
32167 856 25102 0 6207 30854 23:18:09 Swap: 1023 0 1023 23:18:09 23:18:09 23:18:10 ---> ip addr: 23:18:10 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 23:18:10 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 23:18:10 inet 127.0.0.1/8 scope host lo 23:18:10 valid_lft forever preferred_lft forever 23:18:10 inet6 ::1/128 scope host 23:18:10 valid_lft forever preferred_lft forever 23:18:10 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 23:18:10 link/ether fa:16:3e:2b:13:25 brd ff:ff:ff:ff:ff:ff 23:18:10 inet 10.30.107.25/23 brd 10.30.107.255 scope global dynamic ens3 23:18:10 valid_lft 85931sec preferred_lft 85931sec 23:18:10 inet6 fe80::f816:3eff:fe2b:1325/64 scope link 23:18:10 valid_lft forever preferred_lft forever 23:18:10 3: docker0: mtu 1500 qdisc noqueue state DOWN group default 23:18:10 link/ether 02:42:01:c5:cc:61 brd ff:ff:ff:ff:ff:ff 23:18:10 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 23:18:10 valid_lft forever preferred_lft forever 23:18:10 23:18:10 23:18:10 ---> sar -b -r -n DEV: 23:18:10 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-9276) 02/27/24 _x86_64_ (8 CPU) 23:18:10 23:18:10 23:10:24 LINUX RESTART (8 CPU) 23:18:10 23:18:10 23:11:01 tps rtps wtps bread/s bwrtn/s 23:18:10 23:12:01 110.43 42.30 68.13 1900.03 17646.65 23:18:10 23:13:01 126.11 23.10 103.02 2769.67 23579.94 23:18:10 23:14:01 211.23 0.25 210.98 33.33 105768.24 23:18:10 23:15:01 350.27 13.83 336.43 811.60 56864.60 23:18:10 23:16:01 14.58 0.00 14.58 0.00 11075.14 23:18:10 23:17:01 18.93 0.05 18.88 4.53 12119.33 23:18:10 23:18:01 79.20 2.83 76.37 129.05 14328.10 23:18:10 Average: 130.10 11.77 118.33 806.89 34481.67 23:18:10 23:18:10 23:11:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 23:18:10 23:12:01 30145776 31728140 2793436 8.48 68740 1824712 1388388 4.08 841876 1661652 154124 23:18:10 23:13:01 29116720 31663808 3822492 11.60 95940 2731768 1593964 4.69 991740 2473032 699668 23:18:10 23:14:01 25796856 31671988 7142356 21.68 140416 5862216 1473512 4.34 1014600 5600464 1028640 23:18:10 23:15:01 23576936 29617400 9362276 28.42 155784 5993160 8703468 25.61 3247296 5508216 1368 23:18:10 23:16:01 23569252 29610792 9369960 28.45 156084 5993588 8796688 25.88 3258528 5505424 260 23:18:10 23:17:01 23617508 29685640 9321704 28.30 156472 6021656 8077748 23.77 3199128 5519584 208 23:18:10 23:18:01 25725356 31614220 7213856 21.90 159728 5854892 1502772 4.42 1316164 5354096 14128 23:18:10 Average: 25935486 30798855 7003726 21.26 133309 4897427 4505220 13.26 1981333 4517495 271199 23:18:10 23:18:10 23:11:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 23:18:10 23:12:01 ens3 185.35 121.46 1119.38 37.07 0.00 0.00 0.00 0.00 23:18:10 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:10 23:12:01 lo 1.60 1.60 0.17 0.17 0.00 0.00 0.00 0.00 23:18:10 23:13:01 ens3 186.95 128.13 4392.61 14.09 0.00 0.00 0.00 0.00 23:18:10 23:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:10 23:13:01 lo 6.13 6.13 0.57 0.57 0.00 0.00 0.00 0.00 23:18:10 23:13:01 br-4e398ebfcf15 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:10 23:14:01 ens3 1055.84 576.15 27872.43 43.23 0.00 0.00 0.00 0.00 23:18:10 23:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:10 23:14:01 lo 7.33 7.33 0.73 0.73 0.00 0.00 0.00 0.00 23:18:10 23:14:01 br-4e398ebfcf15 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:10 23:15:01 ens3 7.97 5.22 1.93 1.64 0.00 0.00 0.00 0.00 23:18:10 23:15:01 veth9492f1f 0.00 0.38 
0.00 0.02 0.00 0.00 0.00 0.00 23:18:10 23:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:10 23:15:01 veth2166768 0.22 0.63 0.02 0.21 0.00 0.00 0.00 0.00 23:18:10 23:16:01 ens3 3.47 3.05 0.76 1.04 0.00 0.00 0.00 0.00 23:18:10 23:16:01 veth9492f1f 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00 23:18:10 23:16:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:10 23:16:01 veth2166768 0.60 0.58 0.05 1.49 0.00 0.00 0.00 0.00 23:18:10 23:17:01 ens3 13.65 13.21 5.74 15.26 0.00 0.00 0.00 0.00 23:18:10 23:17:01 veth9492f1f 0.00 0.03 0.00 0.00 0.00 0.00 0.00 0.00 23:18:10 23:17:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:10 23:17:01 veth6760a54 107.35 129.64 77.67 31.83 0.00 0.00 0.00 0.01 23:18:10 23:18:01 ens3 65.16 43.81 88.63 18.81 0.00 0.00 0.00 0.00 23:18:10 23:18:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:10 23:18:01 lo 35.08 35.08 6.21 6.21 0.00 0.00 0.00 0.00 23:18:10 Average: ens3 216.91 127.29 4782.98 18.74 0.00 0.00 0.00 0.00 23:18:10 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:18:10 Average: lo 4.46 4.46 0.84 0.84 0.00 0.00 0.00 0.00 23:18:10 23:18:10 23:18:10 ---> sar -P ALL: 23:18:10 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-9276) 02/27/24 _x86_64_ (8 CPU) 23:18:10 23:18:10 23:10:24 LINUX RESTART (8 CPU) 23:18:10 23:18:10 23:11:01 CPU %user %nice %system %iowait %steal %idle 23:18:10 23:12:01 all 10.54 0.00 0.85 2.28 0.03 86.29 23:18:10 23:12:01 0 5.87 0.00 0.52 7.82 0.02 85.77 23:18:10 23:12:01 1 12.60 0.00 0.75 0.37 0.03 86.25 23:18:10 23:12:01 2 24.29 0.00 1.64 1.47 0.07 72.54 23:18:10 23:12:01 3 11.96 0.00 1.41 6.03 0.07 80.54 23:18:10 23:12:01 4 8.79 0.00 0.83 0.45 0.03 89.89 23:18:10 23:12:01 5 8.85 0.00 0.52 0.73 0.05 89.84 23:18:10 23:12:01 6 7.79 0.00 0.68 0.72 0.03 90.78 23:18:10 23:12:01 7 4.19 0.00 0.45 0.67 0.02 94.68 23:18:10 23:13:01 all 10.82 0.00 1.55 1.98 0.05 85.61 23:18:10 23:13:01 0 3.29 0.00 1.08 5.12 0.02 90.49 23:18:10 23:13:01 1 2.34 0.00 0.92 0.70 0.03 96.01 23:18:10 23:13:01 2 22.03 0.00 1.92 0.87 0.05 75.13 23:18:10 23:13:01 3 3.12 0.00 1.31 4.69 0.05 90.83 23:18:10 23:13:01 4 32.82 0.00 2.36 1.96 0.07 62.79 23:18:10 23:13:01 5 6.53 0.00 1.22 2.21 0.08 89.95 23:18:10 23:13:01 6 10.02 0.00 2.19 0.22 0.05 87.52 23:18:10 23:13:01 7 6.35 0.00 1.34 0.07 0.03 92.20 23:18:10 23:14:01 all 11.37 0.00 4.68 7.17 0.06 76.72 23:18:10 23:14:01 0 9.30 0.00 2.72 6.96 0.05 80.97 23:18:10 23:14:01 1 12.20 0.00 4.82 0.49 0.03 82.45 23:18:10 23:14:01 2 11.99 0.00 6.59 0.03 0.07 81.32 23:18:10 23:14:01 3 11.27 0.00 3.80 20.81 0.09 64.03 23:18:10 23:14:01 4 11.25 0.00 6.09 18.26 0.07 64.33 23:18:10 23:14:01 5 11.88 0.00 4.50 2.83 0.07 80.73 23:18:10 23:14:01 6 11.72 0.00 4.88 3.28 0.05 80.07 23:18:10 23:14:01 7 11.38 0.00 4.04 4.87 0.05 79.67 23:18:10 23:15:01 all 25.08 0.00 3.33 5.08 0.09 66.42 23:18:10 23:15:01 0 30.43 0.00 3.99 1.57 0.10 63.91 23:18:10 23:15:01 1 28.68 0.00 3.57 1.04 0.10 66.61 23:18:10 23:15:01 2 19.60 0.00 3.17 3.27 0.08 73.88 23:18:10 23:15:01 3 18.94 0.00 2.67 2.13 0.07 76.18 23:18:10 23:15:01 4 26.91 0.00 3.16 2.23 0.10 67.60 23:18:10 23:15:01 5 29.32 0.00 3.59 18.20 0.08 48.80 23:18:10 23:15:01 6 23.03 0.00 3.10 3.76 0.10 70.01 23:18:10 23:15:01 7 23.69 0.00 3.37 8.45 0.10 64.39 23:18:10 23:16:01 all 6.24 0.00 0.56 0.71 0.06 92.43 23:18:10 23:16:01 0 7.44 0.00 0.73 0.07 0.03 91.72 23:18:10 23:16:01 1 8.00 0.00 0.67 0.02 0.05 91.27 23:18:10 23:16:01 2 6.26 0.00 0.58 0.00 0.03 93.12 23:18:10 23:16:01 3 6.31 0.00 0.54 0.28 0.07 92.80 23:18:10 23:16:01 4 8.09 0.00 0.57 0.00 0.07 91.28 
23:18:10 23:16:01 5 6.02 0.00 0.47 0.00 0.05 93.46 23:18:10 23:16:01 6 4.20 0.00 0.50 5.20 0.07 90.03 23:18:10 23:16:01 7 3.60 0.00 0.47 0.10 0.12 95.72 23:18:10 23:17:01 all 1.23 0.00 0.31 0.65 0.06 97.76 23:18:10 23:17:01 0 1.10 0.00 0.30 0.20 0.07 98.33 23:18:10 23:17:01 1 1.40 0.00 0.37 0.00 0.07 98.16 23:18:10 23:17:01 2 0.67 0.00 0.27 0.00 0.02 99.05 23:18:10 23:17:01 3 1.27 0.00 0.30 0.22 0.07 98.14 23:18:10 23:17:01 4 1.82 0.00 0.32 0.00 0.03 97.83 23:18:10 23:17:01 5 1.13 0.00 0.28 0.18 0.03 98.36 23:18:10 23:17:01 6 0.80 0.00 0.23 4.58 0.07 94.32 23:18:10 23:17:01 7 1.64 0.00 0.37 0.00 0.08 97.91 23:18:10 23:18:01 all 6.82 0.00 0.66 1.10 0.04 91.39 23:18:10 23:18:01 0 1.82 0.00 0.48 0.13 0.07 97.49 23:18:10 23:18:01 1 0.83 0.00 0.48 0.30 0.03 98.35 23:18:10 23:18:01 2 30.30 0.00 1.22 0.73 0.07 67.68 23:18:10 23:18:01 3 1.59 0.00 0.60 0.89 0.03 96.89 23:18:10 23:18:01 4 4.90 0.00 0.65 1.79 0.03 92.63 23:18:10 23:18:01 5 8.84 0.00 0.75 0.22 0.03 90.15 23:18:10 23:18:01 6 2.65 0.00 0.53 4.72 0.03 92.05 23:18:10 23:18:01 7 3.62 0.00 0.50 0.02 0.03 95.83 23:18:10 Average: all 10.29 0.00 1.70 2.70 0.06 85.26 23:18:10 Average: 0 8.46 0.00 1.40 3.12 0.05 86.96 23:18:10 Average: 1 9.42 0.00 1.65 0.42 0.05 88.46 23:18:10 Average: 2 16.45 0.00 2.19 0.91 0.05 80.39 23:18:10 Average: 3 7.76 0.00 1.51 4.97 0.06 85.70 23:18:10 Average: 4 13.50 0.00 1.99 3.49 0.06 80.96 23:18:10 Average: 5 10.35 0.00 1.61 3.47 0.06 84.50 23:18:10 Average: 6 8.59 0.00 1.72 3.21 0.06 86.42 23:18:10 Average: 7 7.76 0.00 1.50 2.02 0.06 88.66 23:18:10 23:18:10 23:18:10
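The sar tables above are the standard sysstat reports named in their headers ("sar -b -r -n DEV" and "sar -P ALL"). As a hedged sketch, the commands below show how the same I/O, memory, network and per-CPU tables could be regenerated; reading from a recorded daily data file, and its path (Ubuntu's default /var/log/sysstat/saDD), are assumptions and not part of the job output.

```bash
#!/bin/bash
# Hedged sketch: regenerate the reports shown above from recorded sysstat data.
# The data-file location is an assumption (Ubuntu's default /var/log/sysstat/saDD).
set -euo pipefail

SA_FILE=${SA_FILE:-/var/log/sysstat/sa$(date +%d)}

# I/O transfer rates, memory utilisation and per-interface network statistics,
# matching the "sar -b -r -n DEV" block in the log.
sar -b -r -n DEV -f "$SA_FILE"

# Per-CPU utilisation including the "all" summary rows,
# matching the "sar -P ALL" block in the log.
sar -P ALL -f "$SA_FILE"
```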