23:10:59 Started by timer
23:10:59 Running as SYSTEM
23:10:59 [EnvInject] - Loading node environment variables.
23:10:59 Building remotely on prd-ubuntu1804-docker-8c-8g-6858 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
23:10:59 [ssh-agent] Looking for ssh-agent implementation...
23:10:59 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
23:10:59 $ ssh-agent
23:10:59 SSH_AUTH_SOCK=/tmp/ssh-fyLeWQUi4uCW/agent.2106
23:10:59 SSH_AGENT_PID=2108
23:10:59 [ssh-agent] Started.
23:10:59 Running ssh-add (command line suppressed)
23:10:59 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_560751172686336686.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_560751172686336686.key)
23:10:59 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
23:10:59 The recommended git tool is: NONE
23:11:01 using credential onap-jenkins-ssh
23:11:01 Wiping out workspace first.
23:11:01 Cloning the remote Git repository
23:11:01 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
23:11:01 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
23:11:01 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
23:11:01 > git --version # timeout=10
23:11:01 > git --version # 'git version 2.17.1'
23:11:01 using GIT_SSH to set credentials Gerrit user
23:11:01 Verifying host key using manually-configured host key entries
23:11:01 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
23:11:01 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
23:11:01 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
23:11:02 Avoid second fetch
23:11:02 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
23:11:02 Checking out Revision dd836dc2d2bd379fba19b395c912d32f1bc7ee38 (refs/remotes/origin/master)
23:11:02 > git config core.sparsecheckout # timeout=10
23:11:02 > git checkout -f dd836dc2d2bd379fba19b395c912d32f1bc7ee38 # timeout=30
23:11:02 Commit message: "Update snapshot and/or references of policy/docker to latest snapshots"
23:11:02 > git rev-list --no-walk dd836dc2d2bd379fba19b395c912d32f1bc7ee38 # timeout=10
23:11:02 provisioning config files...
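
The checkout above pins the workspace to an exact revision rather than a branch tip, which keeps the build reproducible. A minimal replay of that clone-and-pin sequence, using the same commands and commit as the log (the local directory name here is illustrative):

    # Clone the mirror and detach onto the exact commit the job built.
    git init policy-docker && cd policy-docker
    git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git \
      '+refs/heads/*:refs/remotes/origin/*'
    git checkout -f dd836dc2d2bd379fba19b395c912d32f1bc7ee38   # refs/remotes/origin/master at build time
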
23:11:02 copy managed file [npmrc] to file:/home/jenkins/.npmrc
23:11:02 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
23:11:02 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4083628950245168411.sh
23:11:02 ---> python-tools-install.sh
23:11:02 Setup pyenv:
23:11:02 * system (set by /opt/pyenv/version)
23:11:02 * 3.8.13 (set by /opt/pyenv/version)
23:11:02 * 3.9.13 (set by /opt/pyenv/version)
23:11:02 * 3.10.6 (set by /opt/pyenv/version)
23:11:06 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-xV1d
23:11:06 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
23:11:10 lf-activate-venv(): INFO: Installing: lftools
23:11:42 lf-activate-venv(): INFO: Adding /tmp/venv-xV1d/bin to PATH
23:11:42 Generating Requirements File
23:12:18 Python 3.10.6
23:12:18 pip 24.0 from /tmp/venv-xV1d/lib/python3.10/site-packages/pip (python 3.10)
23:12:18 appdirs==1.4.4
23:12:18 argcomplete==3.2.2
23:12:18 aspy.yaml==1.3.0
23:12:18 attrs==23.2.0
23:12:18 autopage==0.5.2
23:12:18 beautifulsoup4==4.12.3
23:12:18 boto3==1.34.45
23:12:18 botocore==1.34.45
23:12:18 bs4==0.0.2
23:12:18 cachetools==5.3.2
23:12:18 certifi==2024.2.2
23:12:18 cffi==1.16.0
23:12:18 cfgv==3.4.0
23:12:18 chardet==5.2.0
23:12:18 charset-normalizer==3.3.2
23:12:18 click==8.1.7
23:12:18 cliff==4.5.0
23:12:18 cmd2==2.4.3
23:12:18 cryptography==3.3.2
23:12:18 debtcollector==2.5.0
23:12:18 decorator==5.1.1
23:12:18 defusedxml==0.7.1
23:12:18 Deprecated==1.2.14
23:12:18 distlib==0.3.8
23:12:18 dnspython==2.6.1
23:12:18 docker==4.2.2
23:12:18 dogpile.cache==1.3.1
23:12:18 email-validator==2.1.0.post1
23:12:18 filelock==3.13.1
23:12:18 future==0.18.3
23:12:18 gitdb==4.0.11
23:12:18 GitPython==3.1.42
23:12:18 google-auth==2.28.0
23:12:18 httplib2==0.22.0
23:12:18 identify==2.5.35
23:12:18 idna==3.6
23:12:18 importlib-resources==1.5.0
23:12:18 iso8601==2.1.0
23:12:18 Jinja2==3.1.3
23:12:18 jmespath==1.0.1
23:12:18 jsonpatch==1.33
23:12:18 jsonpointer==2.4
23:12:18 jsonschema==4.21.1
23:12:18 jsonschema-specifications==2023.12.1
23:12:18 keystoneauth1==5.5.0
23:12:18 kubernetes==29.0.0
23:12:18 lftools==0.37.8
23:12:18 lxml==5.1.0
23:12:18 MarkupSafe==2.1.5
23:12:18 msgpack==1.0.7
23:12:18 multi_key_dict==2.0.3
23:12:18 munch==4.0.0
23:12:18 netaddr==1.2.1
23:12:18 netifaces==0.11.0
23:12:18 niet==1.4.2
23:12:18 nodeenv==1.8.0
23:12:18 oauth2client==4.1.3
23:12:18 oauthlib==3.2.2
23:12:18 openstacksdk==0.62.0
23:12:18 os-client-config==2.1.0
23:12:18 os-service-types==1.7.0
23:12:18 osc-lib==3.0.0
23:12:18 oslo.config==9.3.0
23:12:18 oslo.context==5.3.0
23:12:18 oslo.i18n==6.2.0
23:12:18 oslo.log==5.4.0
23:12:18 oslo.serialization==5.3.0
23:12:18 oslo.utils==7.0.0
23:12:18 packaging==23.2
23:12:18 pbr==6.0.0
23:12:18 platformdirs==4.2.0
23:12:18 prettytable==3.10.0
23:12:18 pyasn1==0.5.1
23:12:18 pyasn1-modules==0.3.0
23:12:18 pycparser==2.21
23:12:18 pygerrit2==2.0.15
23:12:18 PyGithub==2.2.0
23:12:18 pyinotify==0.9.6
23:12:18 PyJWT==2.8.0
23:12:18 PyNaCl==1.5.0
23:12:18 pyparsing==2.4.7
23:12:18 pyperclip==1.8.2
23:12:18 pyrsistent==0.20.0
23:12:18 python-cinderclient==9.4.0
23:12:18 python-dateutil==2.8.2
23:12:18 python-heatclient==3.4.0
23:12:18 python-jenkins==1.8.2
23:12:18 python-keystoneclient==5.3.0
23:12:18 python-magnumclient==4.3.0
23:12:18 python-novaclient==18.4.0
23:12:18 python-openstackclient==6.0.1
23:12:18 python-swiftclient==4.4.0
23:12:18 pytz==2024.1
23:12:18 PyYAML==6.0.1
23:12:18 referencing==0.33.0
23:12:18 requests==2.31.0
23:12:18 requests-oauthlib==1.3.1
23:12:18 requestsexceptions==1.4.0
23:12:18 rfc3986==2.0.0
23:12:18 rpds-py==0.18.0
23:12:18 rsa==4.9
23:12:18 ruamel.yaml==0.18.6
23:12:18 ruamel.yaml.clib==0.2.8
23:12:18 s3transfer==0.10.0
23:12:18 simplejson==3.19.2
23:12:18 six==1.16.0
23:12:18 smmap==5.0.1
23:12:18 soupsieve==2.5
23:12:18 stevedore==5.1.0
23:12:18 tabulate==0.9.0
23:12:18 toml==0.10.2
23:12:18 tomlkit==0.12.3
23:12:18 tqdm==4.66.2
23:12:18 typing_extensions==4.9.0
23:12:18 tzdata==2024.1
23:12:18 urllib3==1.26.18
23:12:18 virtualenv==20.25.0
23:12:18 wcwidth==0.2.13
23:12:18 websocket-client==1.7.0
23:12:18 wrapt==1.16.0
23:12:18 xdg==6.0.0
23:12:18 xmltodict==0.13.0
23:12:18 yq==3.2.3
23:12:18 [EnvInject] - Injecting environment variables from a build step.
23:12:18 [EnvInject] - Injecting as environment variables the properties content
23:12:18 SET_JDK_VERSION=openjdk17
23:12:18 GIT_URL="git://cloud.onap.org/mirror"
23:12:18
23:12:18 [EnvInject] - Variables injected successfully.
23:12:18 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins10409361948059989284.sh
23:12:18 ---> update-java-alternatives.sh
23:12:18 ---> Updating Java version
23:12:19 ---> Ubuntu/Debian system detected
23:12:19 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
23:12:19 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
23:12:19 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
23:12:19 openjdk version "17.0.4" 2022-07-19
23:12:19 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
23:12:19 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
23:12:19 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
23:12:19 [EnvInject] - Injecting environment variables from a build step.
23:12:19 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
23:12:19 [EnvInject] - Variables injected successfully.
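
update-java-alternatives.sh itself is not echoed, but judging from the output above it amounts to pointing the Debian alternatives at OpenJDK 17. A sketch of the equivalent manual switch, assuming the java-17-openjdk-amd64 package is already installed:

    # Select OpenJDK 17 for java/javac (paths taken from the log above).
    sudo update-alternatives --set java  /usr/lib/jvm/java-17-openjdk-amd64/bin/java
    sudo update-alternatives --set javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javac
    export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
    java -version    # expect: openjdk version "17.0.x"
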
23:12:19 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins17559457207303605458.sh
23:12:19 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
23:12:19 + set +u
23:12:19 + save_set
23:12:19 + RUN_CSIT_SAVE_SET=ehxB
23:12:19 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
23:12:19 + '[' 1 -eq 0 ']'
23:12:19 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:19 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:19 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:19 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:19 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:19 + export ROBOT_VARIABLES=
23:12:19 + ROBOT_VARIABLES=
23:12:19 + export PROJECT=pap
23:12:19 + PROJECT=pap
23:12:19 + cd /w/workspace/policy-pap-master-project-csit-pap
23:12:19 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:19 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:19 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:19 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
23:12:19 + relax_set
23:12:19 + set +e
23:12:19 + set +o pipefail
23:12:19 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:19 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:19 +++ mktemp -d
23:12:19 ++ ROBOT_VENV=/tmp/tmp.nIz1duCp0U
23:12:19 ++ echo ROBOT_VENV=/tmp/tmp.nIz1duCp0U
23:12:19 +++ python3 --version
23:12:19 ++ echo 'Python version is: Python 3.6.9'
23:12:19 Python version is: Python 3.6.9
23:12:19 ++ python3 -m venv --clear /tmp/tmp.nIz1duCp0U
23:12:21 ++ source /tmp/tmp.nIz1duCp0U/bin/activate
23:12:21 +++ deactivate nondestructive
23:12:21 +++ '[' -n '' ']'
23:12:21 +++ '[' -n '' ']'
23:12:21 +++ '[' -n /bin/bash -o -n '' ']'
23:12:21 +++ hash -r
23:12:21 +++ '[' -n '' ']'
23:12:21 +++ unset VIRTUAL_ENV
23:12:21 +++ '[' '!' nondestructive = nondestructive ']'
23:12:21 +++ VIRTUAL_ENV=/tmp/tmp.nIz1duCp0U
23:12:21 +++ export VIRTUAL_ENV
23:12:21 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:21 +++ PATH=/tmp/tmp.nIz1duCp0U/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:21 +++ export PATH
23:12:21 +++ '[' -n '' ']'
23:12:21 +++ '[' -z '' ']'
23:12:21 +++ _OLD_VIRTUAL_PS1=
23:12:21 +++ '[' 'x(tmp.nIz1duCp0U) ' '!=' x ']'
23:12:21 +++ PS1='(tmp.nIz1duCp0U) '
23:12:21 +++ export PS1
23:12:21 +++ '[' -n /bin/bash -o -n '' ']'
23:12:21 +++ hash -r
23:12:21 ++ set -exu
23:12:21 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
23:12:24 ++ echo 'Installing Python Requirements'
23:12:24 Installing Python Requirements
23:12:24 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
23:12:42 ++ python3 -m pip -qq freeze
23:12:43 bcrypt==4.0.1
23:12:43 beautifulsoup4==4.12.3
23:12:43 bitarray==2.9.2
23:12:43 certifi==2024.2.2
23:12:43 cffi==1.15.1
23:12:43 charset-normalizer==2.0.12
23:12:43 cryptography==40.0.2
23:12:43 decorator==5.1.1
23:12:43 elasticsearch==7.17.9
23:12:43 elasticsearch-dsl==7.4.1
23:12:43 enum34==1.1.10
23:12:43 idna==3.6
23:12:43 importlib-resources==5.4.0
23:12:43 ipaddr==2.2.0
23:12:43 isodate==0.6.1
23:12:43 jmespath==0.10.0
23:12:43 jsonpatch==1.32
23:12:43 jsonpath-rw==1.4.0
23:12:43 jsonpointer==2.3
23:12:43 lxml==5.1.0
23:12:43 netaddr==0.8.0
23:12:43 netifaces==0.11.0
23:12:43 odltools==0.1.28
23:12:43 paramiko==3.4.0
23:12:43 pkg_resources==0.0.0
23:12:43 ply==3.11
23:12:43 pyang==2.6.0
23:12:43 pyangbind==0.8.1
23:12:43 pycparser==2.21
23:12:43 pyhocon==0.3.60
23:12:43 PyNaCl==1.5.0
23:12:43 pyparsing==3.1.1
23:12:43 python-dateutil==2.8.2
23:12:43 regex==2023.8.8
23:12:43 requests==2.27.1
23:12:43 robotframework==6.1.1
23:12:43 robotframework-httplibrary==0.4.2
23:12:43 robotframework-pythonlibcore==3.0.0
23:12:43 robotframework-requests==0.9.4
23:12:43 robotframework-selenium2library==3.0.0
23:12:43 robotframework-seleniumlibrary==5.1.3
23:12:43 robotframework-sshlibrary==3.8.0
23:12:43 scapy==2.5.0
23:12:43 scp==0.14.5
23:12:43 selenium==3.141.0
23:12:43 six==1.16.0
23:12:43 soupsieve==2.3.2.post1
23:12:43 urllib3==1.26.18
23:12:43 waitress==2.0.0
23:12:43 WebOb==1.8.7
23:12:43 WebTest==3.0.0
23:12:43 zipp==3.6.0
23:12:43 ++ mkdir -p /tmp/tmp.nIz1duCp0U/src/onap
23:12:43 ++ rm -rf /tmp/tmp.nIz1duCp0U/src/onap/testsuite
23:12:43 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
23:12:49 ++ echo 'Installing python confluent-kafka library'
23:12:49 Installing python confluent-kafka library
23:12:49 ++ python3 -m pip install -qq confluent-kafka
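
Condensing the prepare-robot-env.sh trace above: the job builds a throwaway virtualenv, pins the packaging tooling, then installs the Robot stack and the staged ONAP keywords. A sketch of the same steps, assuming WORKSPACE points at the checkout:

    ROBOT_VENV=$(mktemp -d)                  # e.g. /tmp/tmp.nIz1duCp0U
    python3 -m venv --clear "$ROBOT_VENV"
    source "$ROBOT_VENV/bin/activate"
    python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
    python3 -m pip install -qq -r "$WORKSPACE/csit/resources/scripts/pylibs.txt"
    # ONAP's Robot keyword library comes from the Nexus staging index, not plain PyPI.
    python3 -m pip install -qq --upgrade \
      --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple \
      'robotframework-onap==0.6.0.*' --pre
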
23:12:50 ++ echo 'Uninstall docker-py and reinstall docker.'
23:12:50 Uninstall docker-py and reinstall docker.
23:12:50 ++ python3 -m pip uninstall -y -qq docker
23:12:51 ++ python3 -m pip install -U -qq docker
23:12:52 ++ python3 -m pip -qq freeze
23:12:52 bcrypt==4.0.1
23:12:52 beautifulsoup4==4.12.3
23:12:52 bitarray==2.9.2
23:12:52 certifi==2024.2.2
23:12:52 cffi==1.15.1
23:12:52 charset-normalizer==2.0.12
23:12:52 confluent-kafka==2.3.0
23:12:52 cryptography==40.0.2
23:12:52 decorator==5.1.1
23:12:52 deepdiff==5.7.0
23:12:52 dnspython==2.2.1
23:12:52 docker==5.0.3
23:12:52 elasticsearch==7.17.9
23:12:52 elasticsearch-dsl==7.4.1
23:12:52 enum34==1.1.10
23:12:52 future==0.18.3
23:12:52 idna==3.6
23:12:52 importlib-resources==5.4.0
23:12:52 ipaddr==2.2.0
23:12:52 isodate==0.6.1
23:12:52 Jinja2==3.0.3
23:12:52 jmespath==0.10.0
23:12:52 jsonpatch==1.32
23:12:52 jsonpath-rw==1.4.0
23:12:52 jsonpointer==2.3
23:12:52 kafka-python==2.0.2
23:12:52 lxml==5.1.0
23:12:52 MarkupSafe==2.0.1
23:12:52 more-itertools==5.0.0
23:12:52 netaddr==0.8.0
23:12:52 netifaces==0.11.0
23:12:52 odltools==0.1.28
23:12:52 ordered-set==4.0.2
23:12:52 paramiko==3.4.0
23:12:52 pbr==6.0.0
23:12:52 pkg_resources==0.0.0
23:12:52 ply==3.11
23:12:52 protobuf==3.19.6
23:12:52 pyang==2.6.0
23:12:52 pyangbind==0.8.1
23:12:52 pycparser==2.21
23:12:52 pyhocon==0.3.60
23:12:52 PyNaCl==1.5.0
23:12:52 pyparsing==3.1.1
23:12:52 python-dateutil==2.8.2
23:12:52 PyYAML==6.0.1
23:12:52 regex==2023.8.8
23:12:52 requests==2.27.1
23:12:52 robotframework==6.1.1
23:12:52 robotframework-httplibrary==0.4.2
23:12:52 robotframework-onap==0.6.0.dev105
23:12:52 robotframework-pythonlibcore==3.0.0
23:12:52 robotframework-requests==0.9.4
23:12:52 robotframework-selenium2library==3.0.0
23:12:52 robotframework-seleniumlibrary==5.1.3
23:12:52 robotframework-sshlibrary==3.8.0
23:12:52 robotlibcore-temp==1.0.2
23:12:52 scapy==2.5.0
23:12:52 scp==0.14.5
23:12:52 selenium==3.141.0
23:12:52 six==1.16.0
23:12:52 soupsieve==2.3.2.post1
23:12:52 urllib3==1.26.18
23:12:52 waitress==2.0.0
23:12:52 WebOb==1.8.7
23:12:52 websocket-client==1.3.1
23:12:52 WebTest==3.0.0
23:12:52 zipp==3.6.0
23:12:52 ++ uname
23:12:52 ++ grep -q Linux
23:12:52 ++ sudo apt-get -y -qq install libxml2-utils
23:12:53 + load_set
23:12:53 + _setopts=ehuxB
23:12:53 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
23:12:53 ++ tr : ' '
23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:53 + set +o braceexpand
23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:53 + set +o hashall
23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:53 + set +o interactive-comments
23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:53 + set +o nounset
23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:53 + set +o xtrace
23:12:53 ++ echo ehuxB
23:12:53 ++ sed 's/./& /g'
23:12:53 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:53 + set +e
23:12:53 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:53 + set +h
23:12:53 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:53 + set +u
23:12:53 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:53 + set +x
23:12:53 + source_safely /tmp/tmp.nIz1duCp0U/bin/activate
23:12:53 + '[' -z /tmp/tmp.nIz1duCp0U/bin/activate ']'
23:12:53 + relax_set
23:12:53 + set +e
23:12:53 + set +o pipefail
23:12:53 + . /tmp/tmp.nIz1duCp0U/bin/activate
23:12:53 ++ deactivate nondestructive
23:12:53 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
23:12:53 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:53 ++ export PATH
23:12:53 ++ unset _OLD_VIRTUAL_PATH
23:12:53 ++ '[' -n '' ']'
23:12:53 ++ '[' -n /bin/bash -o -n '' ']'
23:12:53 ++ hash -r
23:12:53 ++ '[' -n '' ']'
23:12:53 ++ unset VIRTUAL_ENV
23:12:53 ++ '[' '!' nondestructive = nondestructive ']'
23:12:53 ++ VIRTUAL_ENV=/tmp/tmp.nIz1duCp0U
23:12:53 ++ export VIRTUAL_ENV
23:12:53 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:53 ++ PATH=/tmp/tmp.nIz1duCp0U/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:53 ++ export PATH
23:12:53 ++ '[' -n '' ']'
23:12:53 ++ '[' -z '' ']'
23:12:53 ++ _OLD_VIRTUAL_PS1='(tmp.nIz1duCp0U) '
23:12:53 ++ '[' 'x(tmp.nIz1duCp0U) ' '!=' x ']'
23:12:53 ++ PS1='(tmp.nIz1duCp0U) (tmp.nIz1duCp0U) '
23:12:53 ++ export PS1
23:12:53 ++ '[' -n /bin/bash -o -n '' ']'
23:12:53 ++ hash -r
23:12:53 + load_set
23:12:53 + _setopts=hxB
23:12:53 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:12:53 ++ tr : ' '
23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:53 + set +o braceexpand
23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:53 + set +o hashall
23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:53 + set +o interactive-comments
23:12:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:53 + set +o xtrace
23:12:53 ++ echo hxB
23:12:53 ++ sed 's/./& /g'
23:12:53 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:53 + set +h
23:12:53 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:53 + set +x
23:12:53 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:53 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:53 + export TEST_OPTIONS=
23:12:53 + TEST_OPTIONS=
23:12:53 ++ mktemp -d
23:12:53 + WORKDIR=/tmp/tmp.hHiucWoJXw
23:12:53 + cd /tmp/tmp.hHiucWoJXw
23:12:53 + docker login -u docker -p docker nexus3.onap.org:10001
23:12:53 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
23:12:53 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
23:12:53 Configure a credential helper to remove this warning. See
23:12:53 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
23:12:53
23:12:53 Login Succeeded
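
The warning above is avoidable: docker login reads the password from stdin when given --password-stdin, so the same throwaway docker/docker credentials could be supplied without exposing them on the command line or in process listings, e.g.:

    echo docker | docker login -u docker --password-stdin nexus3.onap.org:10001
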
23:12:53 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:53 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:53 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
23:12:53 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:53 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:53 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:53 + relax_set
23:12:53 + set +e
23:12:53 + set +o pipefail
23:12:53 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:53 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
23:12:53 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:53 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
23:12:53 +++ GERRIT_BRANCH=master
23:12:53 +++ echo GERRIT_BRANCH=master
23:12:53 GERRIT_BRANCH=master
23:12:53 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
23:12:53 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
23:12:53 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
23:12:53 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
23:12:54 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:54 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:54 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:54 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:54 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:54 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:54 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
23:12:54 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:54 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:12:54 +++ grafana=false
23:12:54 +++ gui=false
23:12:54 +++ [[ 2 -gt 0 ]]
23:12:54 +++ key=apex-pdp
23:12:54 +++ case $key in
23:12:54 +++ echo apex-pdp
23:12:54 apex-pdp
23:12:54 +++ component=apex-pdp
23:12:54 +++ shift
23:12:54 +++ [[ 1 -gt 0 ]]
23:12:54 +++ key=--grafana
23:12:54 +++ case $key in
23:12:54 +++ grafana=true
23:12:54 +++ shift
23:12:54 +++ [[ 0 -gt 0 ]]
23:12:54 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:12:54 +++ echo 'Configuring docker compose...'
23:12:54 Configuring docker compose...
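
Reconstructed from the start-compose.sh trace above (key=..., case, shift), the option handling is a standard positional-plus-flags loop along these lines; this is a sketch inferred from the trace, not the script verbatim:

    component=""
    grafana=false
    gui=false
    while [[ $# -gt 0 ]]; do
      key="$1"
      case $key in
        --grafana) grafana=true ;;       # also bring up grafana (and prometheus)
        --gui)     gui=true ;;
        *)         component="$key" ;;   # e.g. apex-pdp
      esac
      shift
    done
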
23:12:54 +++ source export-ports.sh
23:12:54 +++ source get-versions.sh
23:12:56 +++ '[' -z pap ']'
23:12:56 +++ '[' -n apex-pdp ']'
23:12:56 +++ '[' apex-pdp == logs ']'
23:12:56 +++ '[' true = true ']'
23:12:56 +++ echo 'Starting apex-pdp application with Grafana'
23:12:56 Starting apex-pdp application with Grafana
23:12:56 +++ docker-compose up -d apex-pdp grafana
23:12:57 Creating network "compose_default" with the default driver
23:12:57 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
23:12:57 latest: Pulling from prom/prometheus
23:13:00 Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789
23:13:00 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
23:13:00 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
23:13:00 latest: Pulling from grafana/grafana
23:13:05 Digest: sha256:8640e5038e83ca4554ed56b9d76375158bcd51580238c6f5d8adaf3f20dd5379
23:13:05 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
23:13:05 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
23:13:05 10.10.2: Pulling from mariadb
23:13:10 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
23:13:10 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
23:13:10 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)...
23:13:10 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator
23:13:14 Digest: sha256:296577cad1791ddae720c19e5a96c4f6dfea1eb6f9a0aba78ec9d1ac886fa3a4
23:13:14 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT
23:13:14 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
23:13:14 latest: Pulling from confluentinc/cp-zookeeper
23:13:26 Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866
23:13:26 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
23:13:26 Pulling kafka (confluentinc/cp-kafka:latest)...
23:13:26 latest: Pulling from confluentinc/cp-kafka
23:13:28 Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047
23:13:28 Status: Downloaded newer image for confluentinc/cp-kafka:latest
23:13:28 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
23:13:29 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
23:13:34 Digest: sha256:d2876ccda69cc445de980a3d4765cb553f81049d67cc6056cfa9e5429597baa6
23:13:34 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
23:13:35 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)...
23:13:35 3.1.2-SNAPSHOT: Pulling from onap/policy-api
23:13:37 Digest: sha256:78a40fb24ed4d3cee4ce259c77b5dd4ea7c5808a9213d88dd227e26e4f302016
23:13:37 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT
23:13:37 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)...
23:13:37 3.1.2-SNAPSHOT: Pulling from onap/policy-pap
23:13:39 Digest: sha256:1999687a3a7904992c4686afb8b854bbc7221d3c1a80889c66ccaff2973b9dd9
23:13:39 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT
23:13:39 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)...
23:13:39 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp
23:13:50 Digest: sha256:8670bcaff746ebc196cef9125561eb167e1e65c7e2f8d374c0d8834d57564da4
23:13:50 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT
23:13:50 Creating simulator ...
23:13:50 Creating compose_zookeeper_1 ...
23:13:50 Creating prometheus ...
23:13:50 Creating mariadb ...
23:14:05 Creating simulator ... done
23:14:06 Creating mariadb ... done
23:14:06 Creating policy-db-migrator ...
23:14:07 Creating policy-db-migrator ... done
23:14:07 Creating policy-api ...
23:14:08 Creating policy-api ... done
23:14:09 Creating compose_zookeeper_1 ... done
23:14:09 Creating kafka ...
23:14:10 Creating kafka ... done
23:14:10 Creating policy-pap ...
23:14:11 Creating policy-pap ... done
23:14:11 Creating policy-apex-pdp ...
23:14:12 Creating policy-apex-pdp ... done
23:14:13 Creating prometheus ... done
23:14:13 Creating grafana ...
23:14:14 Creating grafana ... done
23:14:14 +++ echo 'Prometheus server: http://localhost:30259'
23:14:14 Prometheus server: http://localhost:30259
23:14:14 +++ echo 'Grafana server: http://localhost:30269'
23:14:14 Grafana server: http://localhost:30269
23:14:14 +++ cd /w/workspace/policy-pap-master-project-csit-pap
23:14:14 ++ sleep 10
23:14:24 ++ unset http_proxy https_proxy
23:14:24 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
23:14:24 Waiting for REST to come up on localhost port 30003...
23:14:24 NAMES                 STATUS
23:14:24 grafana               Up 10 seconds
23:14:24 policy-apex-pdp       Up 12 seconds
23:14:24 policy-pap            Up 13 seconds
23:14:24 kafka                 Up 14 seconds
23:14:24 policy-api            Up 16 seconds
23:14:24 mariadb               Up 18 seconds
23:14:24 prometheus            Up 11 seconds
23:14:24 compose_zookeeper_1   Up 15 seconds
23:14:24 simulator             Up 19 seconds
23:14:30 NAMES                 STATUS
23:14:30 grafana               Up 15 seconds
23:14:30 policy-apex-pdp       Up 17 seconds
23:14:30 policy-pap            Up 18 seconds
23:14:30 kafka                 Up 19 seconds
23:14:30 policy-api            Up 21 seconds
23:14:30 mariadb               Up 23 seconds
23:14:30 prometheus            Up 16 seconds
23:14:30 compose_zookeeper_1   Up 20 seconds
23:14:30 simulator             Up 24 seconds
23:14:35 NAMES                 STATUS
23:14:35 grafana               Up 20 seconds
23:14:35 policy-apex-pdp       Up 22 seconds
23:14:35 policy-pap            Up 23 seconds
23:14:35 kafka                 Up 24 seconds
23:14:35 policy-api            Up 26 seconds
23:14:35 mariadb               Up 28 seconds
23:14:35 prometheus            Up 21 seconds
23:14:35 compose_zookeeper_1   Up 25 seconds
23:14:35 simulator             Up 29 seconds
23:14:40 NAMES                 STATUS
23:14:40 grafana               Up 25 seconds
23:14:40 policy-apex-pdp       Up 27 seconds
23:14:40 policy-pap            Up 28 seconds
23:14:40 kafka                 Up 29 seconds
23:14:40 policy-api            Up 31 seconds
23:14:40 mariadb               Up 33 seconds
23:14:40 prometheus            Up 26 seconds
23:14:40 compose_zookeeper_1   Up 30 seconds
23:14:40 simulator             Up 34 seconds
23:14:45 NAMES                 STATUS
23:14:45 grafana               Up 30 seconds
23:14:45 policy-apex-pdp       Up 32 seconds
23:14:45 policy-pap            Up 33 seconds
23:14:45 kafka                 Up 34 seconds
23:14:45 policy-api            Up 36 seconds
23:14:45 mariadb               Up 38 seconds
23:14:45 prometheus            Up 31 seconds
23:14:45 compose_zookeeper_1   Up 35 seconds
23:14:45 simulator             Up 39 seconds
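
wait_for_rest.sh is only visible through its output, but that output (the banner plus a container table every few seconds) suggests a poll-until-open loop on the REST port. One plausible shape, assuming netcat is available; the actual script may differ:

    host="$1" port="$2"
    echo "Waiting for REST to come up on $host port $port..."
    until nc -z "$host" "$port"; do
      docker ps --format 'table {{ .Names }}\t{{ .Status }}'   # the tables above
      sleep 5
    done
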
23:14:45 ++ export 'SUITES=pap-test.robot
23:14:45 pap-slas.robot'
23:14:45 ++ SUITES='pap-test.robot
23:14:45 pap-slas.robot'
23:14:45 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:45 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:14:45 + load_set
23:14:45 + _setopts=hxB
23:14:45 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:14:45 ++ tr : ' '
23:14:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:45 + set +o braceexpand
23:14:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:45 + set +o hashall
23:14:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:45 + set +o interactive-comments
23:14:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:45 + set +o xtrace
23:14:45 ++ echo hxB
23:14:45 ++ sed 's/./& /g'
23:14:45 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:14:45 + set +h
23:14:45 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:14:45 + set +x
23:14:45 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
23:14:45 + docker_stats
23:14:45 ++ uname -s
23:14:45 + '[' Linux == Darwin ']'
23:14:45 + sh -c 'top -bn1 | head -3'
23:14:45 top - 23:14:45 up 4 min, 0 users, load average: 3.38, 1.48, 0.58
23:14:45 Tasks: 210 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
23:14:45 %Cpu(s): 13.2 us, 2.7 sy, 0.0 ni, 80.1 id, 3.8 wa, 0.0 hi, 0.1 si, 0.1 st
23:14:45 + echo
23:14:45
23:14:45 + sh -c 'free -h'
23:14:45               total        used        free      shared  buff/cache   available
23:14:45 Mem:            31G        2.6G         22G        1.3M        6.0G         28G
23:14:45 Swap:          1.0G          0B        1.0G
23:14:45 + echo
23:14:45
23:14:45 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:14:45 NAMES                 STATUS
23:14:45 grafana               Up 30 seconds
23:14:45 policy-apex-pdp       Up 32 seconds
23:14:45 policy-pap            Up 33 seconds
23:14:45 kafka                 Up 34 seconds
23:14:45 policy-api            Up 36 seconds
23:14:45 mariadb               Up 38 seconds
23:14:45 prometheus            Up 31 seconds
23:14:45 compose_zookeeper_1   Up 35 seconds
23:14:45 simulator             Up 39 seconds
23:14:45 + echo
23:14:45
23:14:45 + docker stats --no-stream
23:14:48 CONTAINER ID   NAME                  CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
23:14:48 187c593b3718   grafana               0.02%     58.08MiB / 31.41GiB   0.18%     18kB / 3.12kB     0B / 24.1MB       20
23:14:48 ca3a43bef73a   policy-apex-pdp       366.70%   126.4MiB / 31.41GiB   0.39%     4.93kB / 4.02kB   0B / 0B           49
23:14:48 d6b8ebbe1c61   policy-pap            1.97%     592MiB / 31.41GiB     1.84%     28.6kB / 30.6kB   0B / 153MB        65
23:14:48 46da5a40c52c   kafka                 41.48%    378.1MiB / 31.41GiB   1.18%     69.1kB / 72.8kB   0B / 508kB        83
23:14:48 967b9e6da63c   policy-api            0.11%     420.2MiB / 31.41GiB   1.31%     1MB / 737kB       0B / 0B           54
23:14:48 f842315ee7bf   mariadb               0.02%     101.7MiB / 31.41GiB   0.32%     995kB / 1.19MB    11.1MB / 47.7MB   37
23:14:48 de437c19f39c   prometheus            0.00%     18.96MiB / 31.41GiB   0.06%     27.5kB / 1.09kB   0B / 0B           14
23:14:48 da0f88fc9932   compose_zookeeper_1   0.10%     103.7MiB / 31.41GiB   0.32%     55.6kB / 48.4kB   0B / 360kB        59
23:14:48 36eed7c77d1d   simulator             0.07%     122MiB / 31.41GiB     0.38%     1.67kB / 0B       4.1kB / 0B        76
23:14:48 + echo
23:14:48
23:14:48 + cd /tmp/tmp.hHiucWoJXw
23:14:48 + echo 'Reading the testplan:'
23:14:48 Reading the testplan:
23:14:48 + echo 'pap-test.robot
23:14:48 pap-slas.robot'
23:14:48 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
23:14:48 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
23:14:48 + cat testplan.txt
23:14:48 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
23:14:48 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:14:48 ++ xargs
23:14:48 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
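
The testplan handling above is three small filters: drop comments and blank lines, prefix each suite with the tests directory, and flatten the result onto one line. As a single pipeline, using the same commands the trace shows:

    SUITES=$(egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt \
      | sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' \
      | xargs)
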
23:14:48 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:48 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:14:48 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:48 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:14:48 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
23:14:48 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
23:14:48 + relax_set
23:14:48 + set +e
23:14:48 + set +o pipefail
23:14:48 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:14:48 ==============================================================================
23:14:48 pap
23:14:48 ==============================================================================
23:14:48 pap.Pap-Test
23:14:48 ==============================================================================
23:14:49 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
23:14:49 ------------------------------------------------------------------------------
23:14:49 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
23:14:49 ------------------------------------------------------------------------------
23:14:50 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
23:14:50 ------------------------------------------------------------------------------
23:14:50 Healthcheck :: Verify policy pap health check | PASS |
23:14:50 ------------------------------------------------------------------------------
23:15:10 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
23:15:10 ------------------------------------------------------------------------------
23:15:11 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
23:15:11 ------------------------------------------------------------------------------
23:15:11 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
23:15:11 ------------------------------------------------------------------------------
23:15:11 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
23:15:11 ------------------------------------------------------------------------------
23:15:11 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
23:15:11 ------------------------------------------------------------------------------
23:15:12 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
23:15:12 ------------------------------------------------------------------------------
23:15:12 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
23:15:12 ------------------------------------------------------------------------------
23:15:12 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
23:15:12 ------------------------------------------------------------------------------
23:15:12 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
23:15:12 ------------------------------------------------------------------------------
23:15:12 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
23:15:12 ------------------------------------------------------------------------------
23:15:13 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
23:15:13 ------------------------------------------------------------------------------
23:15:13 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
23:15:13 ------------------------------------------------------------------------------
23:15:13 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
23:15:13 ------------------------------------------------------------------------------
23:15:33 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
23:15:33 ------------------------------------------------------------------------------
23:15:33 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
23:15:33 ------------------------------------------------------------------------------
23:15:34 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
23:15:34 ------------------------------------------------------------------------------
23:15:34 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
23:15:34 ------------------------------------------------------------------------------
23:15:34 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
23:15:34 ------------------------------------------------------------------------------
23:15:34 pap.Pap-Test | PASS |
23:15:34 22 tests, 22 passed, 0 failed
23:15:34 ==============================================================================
23:15:34 pap.Pap-Slas
23:15:34 ==============================================================================
23:16:34 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
23:16:34 ------------------------------------------------------------------------------
23:16:34 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
23:16:34 ------------------------------------------------------------------------------
23:16:34 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
23:16:34 ------------------------------------------------------------------------------
23:16:34 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
23:16:34 ------------------------------------------------------------------------------
23:16:34 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
23:16:34 ------------------------------------------------------------------------------
23:16:34 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
23:16:34 ------------------------------------------------------------------------------
23:16:34 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
23:16:34 ------------------------------------------------------------------------------
23:16:34 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
23:16:34 ------------------------------------------------------------------------------
23:16:34 pap.Pap-Slas | PASS |
23:16:34 8 tests, 8 passed, 0 failed
23:16:34 ==============================================================================
23:16:34 pap | PASS |
23:16:34 30 tests, 30 passed, 0 failed
23:16:34 ==============================================================================
23:16:34 Output: /tmp/tmp.hHiucWoJXw/output.xml
23:16:34 Log: /tmp/tmp.hHiucWoJXw/log.html
23:16:34 Report: /tmp/tmp.hHiucWoJXw/report.html
23:16:34 + RESULT=0
23:16:34 + load_set
23:16:34 + _setopts=hxB
23:16:34 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:16:34 ++ tr : ' '
23:16:34 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:34 + set +o braceexpand
23:16:34 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:34 + set +o hashall
23:16:34 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:34 + set +o interactive-comments
23:16:34 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:34 + set +o xtrace
23:16:34 ++ echo hxB
23:16:34 ++ sed 's/./& /g'
23:16:34 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:34 + set +h
23:16:34 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:34 + set +x
23:16:34 + echo 'RESULT: 0'
23:16:34 RESULT: 0
23:16:34 + exit 0
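
The relax_set/load_set pairs that bracket every sourced script in this log implement a save-and-restore of shell options, so a helper cannot leak set -e or set -x into its caller. The pattern, reconstructed from the traces (function names as in the log, bodies an approximation):

    relax_set() { set +e; set +o pipefail; }    # be lenient while the script runs
    load_set() {
      # Replay the saved option letters (e.g. "hxB") recorded earlier by save_set.
      for i in $(echo "$_setopts" | sed 's/./& /g'); do set "+$i"; done
    }
    source_safely() {
      [ -z "$1" ] && return 1
      relax_set
      . "$1"        # run the script in the current shell
      load_set      # restore the caller's options afterwards
    }
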
23:16:34 + on_exit
23:16:34 + rc=0
23:16:34 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
23:16:34 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:34 NAMES                 STATUS
23:16:34 grafana               Up 2 minutes
23:16:34 policy-apex-pdp       Up 2 minutes
23:16:34 policy-pap            Up 2 minutes
23:16:34 kafka                 Up 2 minutes
23:16:34 policy-api            Up 2 minutes
23:16:34 mariadb               Up 2 minutes
23:16:34 prometheus            Up 2 minutes
23:16:34 compose_zookeeper_1   Up 2 minutes
23:16:34 simulator             Up 2 minutes
23:16:34 + docker_stats
23:16:34 ++ uname -s
23:16:34 + '[' Linux == Darwin ']'
23:16:34 + sh -c 'top -bn1 | head -3'
23:16:34 top - 23:16:34 up 6 min, 0 users, load average: 0.95, 1.25, 0.61
23:16:34 Tasks: 200 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
23:16:34 %Cpu(s): 10.8 us, 2.1 sy, 0.0 ni, 84.1 id, 3.0 wa, 0.0 hi, 0.1 si, 0.1 st
23:16:34 + echo
23:16:34
23:16:34 + sh -c 'free -h'
23:16:34               total        used        free      shared  buff/cache   available
23:16:34 Mem:            31G        2.7G         22G        1.3M        6.0G         28G
23:16:34 Swap:          1.0G          0B        1.0G
23:16:34 + echo
23:16:34
23:16:34 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:34 NAMES                 STATUS
23:16:34 grafana               Up 2 minutes
23:16:34 policy-apex-pdp       Up 2 minutes
23:16:34 policy-pap            Up 2 minutes
23:16:34 kafka                 Up 2 minutes
23:16:34 policy-api            Up 2 minutes
23:16:34 mariadb               Up 2 minutes
23:16:34 prometheus            Up 2 minutes
23:16:34 compose_zookeeper_1   Up 2 minutes
23:16:34 simulator             Up 2 minutes
23:16:35 + echo
23:16:35
23:16:35 + docker stats --no-stream
23:16:37 CONTAINER ID   NAME                  CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
23:16:37 187c593b3718   grafana               0.02%     65.09MiB / 31.41GiB   0.20%     19kB / 4.26kB     0B / 24.1MB       20
23:16:37 ca3a43bef73a   policy-apex-pdp       0.60%     189.2MiB / 31.41GiB   0.59%     56.2kB / 90.4kB   0B / 0B           52
23:16:37 d6b8ebbe1c61   policy-pap            0.63%     527.2MiB / 31.41GiB   1.64%     2.33MB / 770kB    0B / 153MB        69
23:16:37 46da5a40c52c   kafka                 0.94%     384.7MiB / 31.41GiB   1.20%     236kB / 213kB     0B / 606kB        85
23:16:37 967b9e6da63c   policy-api            0.10%     470.3MiB / 31.41GiB   1.46%     2.49MB / 1.26MB   0B / 0B           57
23:16:37 f842315ee7bf   mariadb               0.02%     103.1MiB / 31.41GiB   0.32%     1.95MB / 4.77MB   11.1MB / 48.1MB   28
23:16:37 de437c19f39c   prometheus            0.00%     23.68MiB / 31.41GiB   0.07%     138kB / 9.98kB    0B / 0B           14
23:16:37 da0f88fc9932   compose_zookeeper_1   0.06%     105.1MiB / 31.41GiB   0.33%     58.5kB / 50kB     0B / 360kB        59
23:16:37 36eed7c77d1d   simulator             0.10%     122.2MiB / 31.41GiB   0.38%     1.94kB / 0B       4.1kB / 0B        78
23:16:37 + echo
23:16:37
23:16:37 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:37 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
23:16:37 + relax_set
23:16:37 + set +e
23:16:37 + set +o pipefail
23:16:37 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:37 ++ echo 'Shut down started!'
23:16:37 Shut down started!
23:16:37 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:16:37 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:16:37 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:16:37 ++ source export-ports.sh
23:16:37 ++ source get-versions.sh
23:16:39 ++ echo 'Collecting logs from docker compose containers...'
23:16:39 Collecting logs from docker compose containers...
23:16:39 ++ docker-compose logs
23:16:41 ++ cat docker_compose.log
23:16:41 Attaching to grafana, policy-apex-pdp, policy-pap, kafka, policy-api, policy-db-migrator, mariadb, prometheus, compose_zookeeper_1, simulator
23:16:41 zookeeper_1 | ===> User
23:16:41 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
23:16:41 zookeeper_1 | ===> Configuring ...
23:16:41 zookeeper_1 | ===> Running preflight checks ...
23:16:41 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ...
23:16:41 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ...
23:16:41 zookeeper_1 | ===> Launching ...
23:16:41 zookeeper_1 | ===> Launching zookeeper ...
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,389] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,395] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,395] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,395] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,395] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,396] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,397] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,397] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,397] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,398] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,398] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,398] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,398] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,398] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,398] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,399] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,409] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,412] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,412] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,414] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:41 zookeeper_1 | [2024-02-19 23:14:13,423] INFO [ZooKeeper ASCII-art startup banner] (org.apache.zookeeper.server.ZooKeeperServer)
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.989809723Z level=info msg="Starting Grafana" version=10.3.3 commit=252761264e22ece57204b327f9130d3b44592c01 branch=HEAD compiled=2024-02-19T23:14:14Z
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990007373Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990017193Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990020663Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990023853Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990027173Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990030263Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990033033Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990035913Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990039533Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990044853Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990047693Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990051414Z level=info msg=Target target=[all]
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990056354Z level=info msg="Path Home" path=/usr/share/grafana
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990059644Z level=info msg="Path Data" path=/var/lib/grafana
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990070264Z level=info msg="Path Logs" path=/var/log/grafana
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990072984Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990079084Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
23:16:41 grafana | logger=settings t=2024-02-19T23:14:14.990082404Z level=info msg="App mode production"
23:16:41 grafana | logger=sqlstore t=2024-02-19T23:14:14.990409426Z level=info msg="Connecting to DB" dbtype=sqlite3
23:16:41 grafana | logger=sqlstore t=2024-02-19T23:14:14.990427306Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
logger=sqlstore t=2024-02-19T23:14:14.990427306Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:14.991159499Z level=info msg="Starting DB migrations" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:14.992069422Z level=info msg="Executing migration" id="create migration_log table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:14.992976936Z level=info msg="Migration successfully executed" id="create migration_log table" duration=906.774µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:14.996740133Z level=info msg="Executing migration" id="create user table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:14.997227425Z level=info msg="Migration successfully executed" id="create user table" duration=487.152µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.00091162Z level=info msg="Executing migration" id="add unique index user.login" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.001634794Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=722.834µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.007317132Z level=info msg="Executing migration" id="add unique index user.email" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.008032621Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=715.349µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.011224017Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.011922673Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=698.106µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.017816193Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.018776102Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=959.809µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.026398905Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.028646935Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.25021ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.034496324Z level=info msg="Executing migration" id="create user table v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.035001798Z level=info msg="Migration successfully executed" id="create user table v2" duration=503.144µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.038285627Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.038817341Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=531.544µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.04582193Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.046371995Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=550.005µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.051132585Z level=info msg="Executing migration" id="copy data_source v1 to v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.051413237Z level=info msg="Migration 
successfully executed" id="copy data_source v1 to v2" duration=280.742µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.057073895Z level=info msg="Executing migration" id="Drop old table user_v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.058019454Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=944.929µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.064340777Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.065536707Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.19297ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.070172556Z level=info msg="Executing migration" id="Update user table charset" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.070294547Z level=info msg="Migration successfully executed" id="Update user table charset" duration=122.921µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.07294488Z level=info msg="Executing migration" id="Add last_seen_at column to user" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.07418102Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.23585ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.077885992Z level=info msg="Executing migration" id="Add missing user data" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.078243085Z level=info msg="Migration successfully executed" id="Add missing user data" duration=357.133µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.084106674Z level=info msg="Executing migration" id="Add is_disabled column to user" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.085389885Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.2776ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.089133557Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.090046424Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=916.587µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.093337042Z level=info msg="Executing migration" id="Add is_service_account column to user" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.094605413Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.268261ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.101709413Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.115254888Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=13.546165ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.120066089Z level=info msg="Executing migration" id="create temp user table v1-7" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.120913225Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=847.196µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.124847269Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.125741347Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=894.418µs 23:16:41 
grafana | logger=migrator t=2024-02-19T23:14:15.131488925Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.132727195Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.23804ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.137693387Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.139009259Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.320302ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.143784279Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.14509411Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.309931ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.148528539Z level=info msg="Executing migration" id="Update temp_user table charset" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.14862477Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=101.631µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.153765364Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.15452274Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=757.276µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.157576066Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.158662905Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.086719ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.162119735Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.163227583Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.107649ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.171851016Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.172914746Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.06237ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.179053558Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.185098449Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=6.045671ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.189620697Z level=info msg="Executing migration" id="create temp_user v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.190162552Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=545.225µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.193384309Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.194033154Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" 
duration=649.235µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.199730802Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.200461559Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=732.837µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.203663466Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.204313452Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=650.736µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.207423308Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.208072453Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=651.195µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.215779158Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.216184392Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=407.124µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.221065543Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.221525457Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=460.804µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.224833975Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.225112427Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=278.232µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.228223913Z level=info msg="Executing migration" id="create star table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.228671068Z level=info msg="Migration successfully executed" id="create star table" duration=449.305µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.236392543Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.237097249Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=700.036µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.241411745Z level=info msg="Executing migration" id="create org table v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.24197432Z level=info msg="Migration successfully executed" id="create org table v1" duration=565.795µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.247103603Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.247728858Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=625.875µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.253830691Z level=info msg="Executing migration" id="create org_user table v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.254341335Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=511.164µs 23:16:41 grafana | logger=migrator 
t=2024-02-19T23:14:15.259977203Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.260512227Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=530.044µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.264728002Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.265264638Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=537.306µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.268145822Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.268784407Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=638.255µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.27500908Z level=info msg="Executing migration" id="Update org table charset" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.2750316Z level=info msg="Migration successfully executed" id="Update org table charset" duration=23.32µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.279367367Z level=info msg="Executing migration" id="Update org_user table charset" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.279388907Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=22.39µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.282400743Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.282519453Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=118.69µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.284613771Z level=info msg="Executing migration" id="create dashboard table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.285086535Z level=info msg="Migration successfully executed" id="create dashboard table" duration=472.314µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.294163342Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.294711656Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=553.224µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.300038821Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.300668417Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=629.996µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.303719592Z level=info msg="Executing migration" id="create dashboard_tag table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.304209167Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=484.405µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.307189592Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.307762927Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=574.665µs 23:16:41 grafana | logger=migrator 
t=2024-02-19T23:14:15.313087942Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.313647967Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=559.995µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.316710373Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.321276332Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.565599ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.324560329Z level=info msg="Executing migration" id="create dashboard v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.325027733Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=467.284µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.330224647Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.330728451Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=503.684µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.337216456Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.338016393Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=802.597µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.343829552Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.344109245Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=279.153µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.347421013Z level=info msg="Executing migration" id="drop table dashboard_v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.348110948Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=689.645µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.35181712Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.352034502Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=217.812µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.357065214Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.359878738Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.813104ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.37186418Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.374933126Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.068156ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.38145846Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.383345537Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.886077ms 23:16:41 grafana | logger=migrator 
t=2024-02-19T23:14:15.38726933Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.388180528Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=910.908µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.392732796Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.394641002Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.911176ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.398749957Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.400997586Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=2.247839ms 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,423] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,423] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,424] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,424] INFO Server environment:host.name=da0f88fc9932 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,424] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,424] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,424] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.404480565Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.405047681Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=562.546µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.410257454Z level=info msg="Executing migration" id="Update dashboard table charset" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.410276574Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=19.29µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.414925044Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.414953614Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=29.25µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.421090566Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.422437778Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.347182ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.430078113Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.43330388Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=3.231377ms 
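
Several of the migrations above (user to user_v1, temp_user to temp_user_tmp_qwerty, dashboard to dashboard_v1) apply the same table-upgrade strategy: rename the existing table aside, create the new schema under the original name, copy the rows across, then drop the renamed copy. That sequence is the usual workaround on engines such as SQLite whose ALTER TABLE cannot change column definitions in place. A sketch of the four steps; the table and column names are illustrative, not the real Grafana schema:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE temp_user (id INTEGER PRIMARY KEY, email TEXT);
    INSERT INTO temp_user (email) VALUES ('someone@example.org');

    -- 1. Move the v1 table out of the way.
    ALTER TABLE temp_user RENAME TO temp_user_tmp_qwerty;

    -- 2. Create v2 under the original name, here with an extra column.
    CREATE TABLE temp_user (id INTEGER PRIMARY KEY, email TEXT, status TEXT DEFAULT '');

    -- 3. Copy the v1 rows into v2.
    INSERT INTO temp_user (id, email) SELECT id, email FROM temp_user_tmp_qwerty;

    -- 4. Drop the renamed v1 table.
    DROP TABLE temp_user_tmp_qwerty;
""")
print(con.execute("SELECT id, email, status FROM temp_user").fetchall())
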
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.43696684Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.43928791Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.32057ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.442633518Z level=info msg="Executing migration" id="Add column uid in dashboard" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.444746717Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.112669ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.449932571Z level=info msg="Executing migration" id="Update uid column values in dashboard" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.450114222Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=182.611µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.454823321Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.455630559Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=806.848µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.461194036Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.462647698Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.453202ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.466810833Z level=info msg="Executing migration" id="Update dashboard title length" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.466854844Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=45.071µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.472127028Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.473003916Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=876.638µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.476121382Z level=info msg="Executing migration" id="create dashboard_provisioning" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.476861268Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=739.166µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.481191545Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.488487086Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=7.295771ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.49717555Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.49840323Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=1.226651ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.504451572Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.505363869Z level=info 
msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=906.428µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.508806069Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.509828627Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.022188ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.515101962Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.515487795Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=383.063µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.519406908Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.520517828Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=1.11273ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.52552965Z level=info msg="Executing migration" id="Add check_sum column" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.528849448Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.329778ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.53494824Z level=info msg="Executing migration" id="Add index for dashboard_title" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.535852187Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=909.767µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.541083851Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.541327063Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=241.762µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.546181305Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.546523478Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=341.823µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.550305949Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.551663601Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.356672ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.555326082Z level=info msg="Executing migration" id="Add isPublic for dashboard" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.558753211Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.428819ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.564282127Z level=info msg="Executing migration" id="create data_source table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.565208636Z level=info msg="Migration successfully executed" id="create data_source table" duration=920.009µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.569026358Z level=info msg="Executing migration" id="add index data_source.account_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.570250829Z level=info 
msg="Migration successfully executed" id="add index data_source.account_id" duration=1.224181ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.574296243Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.575600493Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.30395ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.581920037Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.582724254Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=807.017µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.587209572Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.588553963Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.343681ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.592461946Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.601941157Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.479581ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.606365084Z level=info msg="Executing migration" id="create data_source table v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.607261322Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=895.938µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.611114294Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.612036692Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=919.278µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.619222313Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.62010371Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=881.157µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.625873839Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,424] INFO Server 
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:
/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,424] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,424] INFO Server 
environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,424] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,424] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,425] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,425] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,425] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,425] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,425] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,425] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,425] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,425] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,425] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,425] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,425] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,425] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,425] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,425] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,425] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,426] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,427] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,427] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,428] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,428] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,428] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,428] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,428] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,428] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,428] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,428] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,430] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,430] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,431] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,431] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,431] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,450] INFO Logging initialized @460ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,526] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,526] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,543] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,578] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,578] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,579] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,581] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,589] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,600] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,600] INFO Started @611ms (org.eclipse.jetty.server.Server) 23:16:41 
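
The session-timeout lines above follow directly from the tick: when minSessionTimeout and maxSessionTimeout are left unset (-1), ZooKeeper derives them as 2x and 20x tickTime, so the tickTime of 2000 ms reported in the "Created server" entry yields exactly the 4000 ms and 40000 ms logged earlier. The arithmetic, as a trivially runnable check (the 2x/20x rule is standard ZooKeeper behaviour; the snippet is only illustration):

# Defaults ZooKeeper applies when min/max session timeouts are -1.
tick_time_ms = 2000                      # "Created server with tickTime 2000 ms"
min_session_timeout = 2 * tick_time_ms   # -> 4000 ms, matching the log
max_session_timeout = 20 * tick_time_ms  # -> 40000 ms, matching the log
print(min_session_timeout, max_session_timeout)
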
zookeeper_1 | [2024-02-19 23:14:13,600] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,604] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,605] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,606] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,608] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,623] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,623] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,624] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,624] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,628] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,628] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,630] INFO Snapshot loaded in 6 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,631] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,631] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,638] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,640] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,650] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 23:16:41 zookeeper_1 | [2024-02-19 23:14:13,650] INFO ZooKeeper audit is disabled. 
(org.apache.zookeeper.audit.ZKAuditProvider) 23:16:41 zookeeper_1 | [2024-02-19 23:14:15,040] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.626342004Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=467.925µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.629759412Z level=info msg="Executing migration" id="Add column with_credentials" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.631624917Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.865085ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.636199236Z level=info msg="Executing migration" id="Add secure json data column" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.639870297Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=3.670581ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.644313995Z level=info msg="Executing migration" id="Update data_source table charset" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.644334395Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=20.67µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.647384071Z level=info msg="Executing migration" id="Update initial version to 1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.647536702Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=152.521µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.650607468Z level=info msg="Executing migration" id="Add read_only data column" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.654337431Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.726342ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.663178335Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.663446267Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=265.812µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.667489121Z level=info msg="Executing migration" id="Update json_data with nulls" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.667737663Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=248.602µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.671284514Z level=info msg="Executing migration" id="Add uid column" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.673644673Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.359729ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.677740338Z level=info msg="Executing migration" id="Update uid value" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.678040281Z level=info msg="Migration successfully executed" id="Update uid value" duration=302.813µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.682995573Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.68387843Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=882.587µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.68740967Z level=info msg="Executing migration" id="add unique index 
datasource_org_id_is_default" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.688921183Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.488613ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.692457113Z level=info msg="Executing migration" id="create api_key table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.693747664Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.289942ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.700256779Z level=info msg="Executing migration" id="add index api_key.account_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.701216977Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=959.997µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.705486833Z level=info msg="Executing migration" id="add index api_key.key" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.706804894Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.313501ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.710612127Z level=info msg="Executing migration" id="add index api_key.account_id_name" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.711962658Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.351051ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.719004668Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.719944705Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=940.417µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.723199033Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.72407334Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=874.257µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.72874059Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 23:16:41 kafka | ===> User 23:16:41 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:41 kafka | ===> Configuring ... 23:16:41 kafka | Running in Zookeeper mode... 23:16:41 kafka | ===> Running preflight checks ... 23:16:41 kafka | ===> Check if /var/lib/kafka/data is writable ... 23:16:41 kafka | ===> Check if Zookeeper is healthy ... 23:16:41 kafka | SLF4J: Class path contains multiple SLF4J bindings. 23:16:41 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:41 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:41 kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
23:16:41 kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] 23:16:41 kafka | [2024-02-19 23:14:14,966] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,966] INFO Client environment:host.name=46da5a40c52c (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,966] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,966] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,966] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.729625727Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=891.667µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.736218543Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.745090828Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.871775ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.748733579Z level=info msg="Executing migration" id="create api_key table v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.767311546Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=18.577587ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.779190227Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.780634119Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.444322ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.785989875Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.787343606Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.353611ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.791523232Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.792372008Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=848.656µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.800179905Z level=info msg="Executing migration" id="copy api_key v1 to v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.800693859Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=516.074µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.806642909Z level=info msg="Executing migration" id="Drop old table api_key_v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.807571837Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=938.448µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.811366199Z level=info msg="Executing migration" id="Update api_key table charset" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.81140233Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=37.171µs 23:16:41 grafana | logger=migrator 
t=2024-02-19T23:14:15.816319901Z level=info msg="Executing migration" id="Add expires to api_key table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.820482256Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.165045ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.824759192Z level=info msg="Executing migration" id="Add service account foreign key" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.827320504Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.555552ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.831718611Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.831932843Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=215.472µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.835603964Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.838109505Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.505191ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.847268292Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.849822104Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.553152ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.853703058Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.854438764Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=738.046µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.858332766Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.859024593Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=691.677µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.86820608Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.869766803Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.560133ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.873604956Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.874437463Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=832.447µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.881279121Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.881976957Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=698.476µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.887281232Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.887885667Z level=info msg="Migration successfully executed" id="create index 
IDX_dashboard_snapshot_user_id - v5" duration=604.405µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.893384663Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.893433124Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=48.711µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.897423097Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.897446367Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=21.14µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.900409322Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.902343869Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=1.934427ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.906284923Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.908200548Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=1.915335ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.912328954Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.912374554Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=45.74µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.9154219Z level=info msg="Executing migration" id="create quota table v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.915886774Z level=info msg="Migration successfully executed" id="create quota table v1" duration=467.284µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.919981378Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 23:16:41 kafka | [2024-02-19 23:14:14,966] INFO Client 
environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8
.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr
/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,966] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,966] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,966] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,966] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,966] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,966] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,966] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,967] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,972] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,972] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,972] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,972] INFO Client environment:os.memory.total=504MB 
(org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,975] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:14,979] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:41 kafka | [2024-02-19 23:14:14,984] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:41 kafka | [2024-02-19 23:14:14,992] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:41 kafka | [2024-02-19 23:14:15,013] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) 23:16:41 kafka | [2024-02-19 23:14:15,014] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 23:16:41 kafka | [2024-02-19 23:14:15,024] INFO Socket connection established, initiating session, client: /172.17.0.8:50360, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) 23:16:41 kafka | [2024-02-19 23:14:15,054] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000003a15c0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 23:16:41 kafka | [2024-02-19 23:14:15,174] INFO Session: 0x1000003a15c0000 closed (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:15,174] INFO EventThread shut down for session: 0x1000003a15c0000 (org.apache.zookeeper.ClientCnxn) 23:16:41 kafka | Using log4j config /etc/kafka/log4j.properties 23:16:41 kafka | ===> Launching ... 23:16:41 kafka | ===> Launching kafka ... 23:16:41 kafka | [2024-02-19 23:14:15,857] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 23:16:41 kafka | [2024-02-19 23:14:16,210] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:41 kafka | [2024-02-19 23:14:16,280] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 23:16:41 kafka | [2024-02-19 23:14:16,281] INFO starting (kafka.server.KafkaServer) 23:16:41 kafka | [2024-02-19 23:14:16,282] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 23:16:41 kafka | [2024-02-19 23:14:16,297] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:host.name=46da5a40c52c (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:41 policy-apex-pdp | Waiting for mariadb port 3306... 
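policy-apex-pdp gates its start on mariadb, kafka, and pap being reachable, and policy-db-migrator (further down) does the same with visible `nc ... Connection refused` retries. A sketch of that wait-for-port pattern, assuming only that `nc` is available in the container; the timeout value is illustrative:

```bash
#!/usr/bin/env bash
# Block until a TCP port accepts connections, mirroring the
# "Waiting for <service> port <n>..." gates in the entrypoints above.
wait_for_port() {
    local host="$1" port="$2" timeout="${3:-120}"
    local deadline=$(( $(date +%s) + timeout ))
    echo "Waiting for ${host} port ${port}..."
    until nc -z "$host" "$port" 2>/dev/null; do
        if [ "$(date +%s)" -ge "$deadline" ]; then
            echo "timed out waiting for ${host}:${port}" >&2
            return 1
        fi
        sleep 1
    done
    echo "${host} (${port}) open"
}

# Same dependency order as the policy-apex-pdp log: DB, broker, PAP.
wait_for_port mariadb 3306
wait_for_port kafka 9092
wait_for_port pap 6969
```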
23:16:41 policy-apex-pdp | mariadb (172.17.0.2:3306) open 23:16:41 policy-apex-pdp | Waiting for kafka port 9092... 23:16:41 policy-apex-pdp | kafka (172.17.0.8:9092) open 23:16:41 policy-apex-pdp | Waiting for pap port 6969... 23:16:41 policy-apex-pdp | pap (172.17.0.9:6969) open 23:16:41 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 23:16:41 policy-apex-pdp | [2024-02-19T23:14:45.773+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 23:16:41 policy-apex-pdp | [2024-02-19T23:14:45.971+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:41 policy-apex-pdp | allow.auto.create.topics = true 23:16:41 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:41 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:41 policy-apex-pdp | auto.offset.reset = latest 23:16:41 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:41 policy-apex-pdp | check.crcs = true 23:16:41 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:41 policy-apex-pdp | client.id = consumer-8a152ea0-3554-4e34-a917-801a2773d54e-1 23:16:41 policy-apex-pdp | client.rack = 23:16:41 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:41 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:41 policy-apex-pdp | enable.auto.commit = true 23:16:41 policy-apex-pdp | exclude.internal.topics = true 23:16:41 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:41 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:41 policy-apex-pdp | fetch.min.bytes = 1 23:16:41 policy-apex-pdp | group.id = 8a152ea0-3554-4e34-a917-801a2773d54e 23:16:41 policy-apex-pdp | group.instance.id = null 23:16:41 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:41 policy-apex-pdp | interceptor.classes = [] 23:16:41 policy-apex-pdp | internal.leave.group.on.close = true 23:16:41 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:41 policy-apex-pdp | isolation.level = read_uncommitted 23:16:41 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:41 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:41 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:41 policy-apex-pdp | max.poll.records = 500 23:16:41 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:41 policy-apex-pdp | metric.reporters = [] 23:16:41 policy-apex-pdp | metrics.num.samples = 2 23:16:41 policy-apex-pdp | metrics.recording.level = INFO 23:16:41 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:41 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:41 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:41 
policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:41 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.920540173Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=558.255µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.925802237Z level=info msg="Executing migration" id="Update quota table charset" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.925820757Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=19.13µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.930703249Z level=info msg="Executing migration" id="create plugin_setting table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.931201673Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=498.434µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.934423101Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.935003986Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=577.535µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.937964781Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.939944928Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=1.979177ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.943879271Z level=info msg="Executing migration" id="Update plugin_setting table charset" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.943899771Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=20.9µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.946849836Z level=info msg="Executing migration" id="create session table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.94738619Z level=info msg="Migration successfully executed" id="create session table" duration=536.084µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.950669588Z level=info msg="Executing migration" id="Drop old table playlist table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.950730098Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=60.49µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.955895002Z level=info msg="Executing migration" id="Drop old table playlist_item table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.955950002Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=57.7µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.962334667Z level=info msg="Executing migration" id="create playlist table v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.962851751Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=520.384µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.966767114Z level=info msg="Executing migration" id="create playlist item table v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.967243028Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=488.054µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.970552156Z level=info msg="Executing 
migration" id="Update playlist table charset" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.970569646Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=18.09µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.974642241Z level=info msg="Executing migration" id="Update playlist_item table charset" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.974664501Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=25.4µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.977847058Z level=info msg="Executing migration" id="Add playlist column created_at" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.979923166Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.075648ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.982924051Z level=info msg="Executing migration" id="Add playlist column updated_at" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.985018729Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.095568ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.989215324Z level=info msg="Executing migration" id="drop preferences table v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.989271095Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=55.621µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.992156929Z level=info msg="Executing migration" id="drop preferences table v3" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.992208949Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=52.38µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.998493892Z level=info msg="Executing migration" id="create preferences table v3" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:15.998983657Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=492.955µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.005198942Z level=info msg="Executing migration" id="Update preferences table charset" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.005218553Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=18.391µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.009974574Z level=info msg="Executing migration" id="Add column team_id in preferences" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.012105093Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.130339ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.015202198Z level=info msg="Executing migration" id="Update team_id column values in preferences" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.015309108Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=106.75µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.019308857Z level=info msg="Executing migration" id="Add column week_start in preferences" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.021402307Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.09314ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.025612726Z level=info msg="Executing migration" id="Add column preferences.json_data" 23:16:41 grafana | logger=migrator 
t=2024-02-19T23:14:16.027730346Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.11572ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.030890921Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.030943001Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=52.5µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.034124276Z level=info msg="Executing migration" id="Add preferences index org_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.034969789Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=845.103µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.041526211Z level=info msg="Executing migration" id="Add preferences index user_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.042741716Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.215255ms 23:16:41 mariadb | 2024-02-19 23:14:06+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:41 mariadb | 2024-02-19 23:14:07+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 23:16:41 mariadb | 2024-02-19 23:14:07+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:41 mariadb | 2024-02-19 23:14:07+00:00 [Note] [Entrypoint]: Initializing database files 23:16:41 mariadb | 2024-02-19 23:14:07 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:41 mariadb | 2024-02-19 23:14:07 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:41 mariadb | 2024-02-19 23:14:07 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:41 mariadb | 23:16:41 mariadb | 23:16:41 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 23:16:41 mariadb | To do so, start the server, then issue the following command: 23:16:41 mariadb | 23:16:41 mariadb | '/usr/bin/mysql_secure_installation' 23:16:41 mariadb | 23:16:41 mariadb | which will also give you the option of removing the test 23:16:41 mariadb | databases and anonymous user created by default. This is 23:16:41 mariadb | strongly recommended for production servers. 23:16:41 mariadb | 23:16:41 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 23:16:41 mariadb | 23:16:41 mariadb | Please report any problems at https://mariadb.org/jira 23:16:41 mariadb | 23:16:41 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 23:16:41 mariadb | 23:16:41 mariadb | Consider joining MariaDB's strong and vibrant community: 23:16:41 mariadb | https://mariadb.org/get-involved/ 23:16:41 mariadb | 23:16:41 mariadb | 2024-02-19 23:14:08+00:00 [Note] [Entrypoint]: Database files initialized 23:16:41 mariadb | 2024-02-19 23:14:08+00:00 [Note] [Entrypoint]: Starting temporary server 23:16:41 policy-api | Waiting for mariadb port 3306... 23:16:41 policy-api | mariadb (172.17.0.2:3306) open 23:16:41 policy-api | Waiting for policy-db-migrator port 6824... 23:16:41 policy-api | policy-db-migrator (172.17.0.6:6824) open 23:16:41 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 23:16:41 policy-api | 23:16:41 policy-api | . 
[Spring Boot ASCII-art startup banner] 23:16:41 policy-api | :: Spring Boot :: (v3.1.8) 23:16:41 policy-api | 23:16:41 policy-api | [2024-02-19T23:14:22.188+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 23 (/app/api.jar started by policy in /opt/app/policy/api/bin) 23:16:41 policy-api | [2024-02-19T23:14:22.190+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:41 policy-api | [2024-02-19T23:14:23.884+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:41 policy-api | [2024-02-19T23:14:23.977+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 83 ms. Found 6 JPA repository interfaces. 23:16:41 policy-api | [2024-02-19T23:14:24.368+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:41 policy-api | [2024-02-19T23:14:24.368+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:41 policy-api | [2024-02-19T23:14:25.036+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:41 policy-api | [2024-02-19T23:14:25.044+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:41 policy-api | [2024-02-19T23:14:25.046+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:41 policy-api | [2024-02-19T23:14:25.046+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:16:41 policy-api | [2024-02-19T23:14:25.125+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:41 policy-api | [2024-02-19T23:14:25.125+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2871 ms 23:16:41 policy-api | [2024-02-19T23:14:25.514+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:41 policy-api | [2024-02-19T23:14:25.593+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:16:41 policy-api | [2024-02-19T23:14:25.596+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:16:41 policy-api | [2024-02-19T23:14:25.639+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:41 policy-api | [2024-02-19T23:14:25.987+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:41 policy-api | [2024-02-19T23:14:26.006+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:41 policy-api | [2024-02-19T23:14:26.105+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@7636823f 23:16:41 policy-api | [2024-02-19T23:14:26.107+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
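With HikariPool-1 started, policy-api goes on to bind Tomcat on port 6969 under context path /policy/api/v1 (see the entries that follow). A hedged readiness probe for that listener; the `policy-api` hostname and the idea that any HTTP status, including 401 from the security filter chain, counts as "up" are assumptions for illustration:

```bash
#!/usr/bin/env bash
# Readiness probe for the policy-api listener (Tomcat, port 6969,
# context path /policy/api/v1). Endpoints are secured, so an
# unauthenticated request typically returns 401 -- any HTTP status
# other than curl's 000 placeholder proves the container is serving.
code=$(curl -s -o /dev/null -w '%{http_code}' \
    "http://policy-api:6969/policy/api/v1" || true)

if [ -n "$code" ] && [ "$code" != "000" ]; then
    echo "policy-api is listening (HTTP $code)"
else
    echo "policy-api not reachable yet" >&2
    exit 1
fi
```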
23:16:41 policy-api | [2024-02-19T23:14:26.136+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) 23:16:41 policy-api | [2024-02-19T23:14:26.137+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead 23:16:41 policy-api | [2024-02-19T23:14:27.920+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:41 policy-api | [2024-02-19T23:14:27.923+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:41 policy-api | [2024-02-19T23:14:28.872+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 23:16:41 policy-api | [2024-02-19T23:14:29.672+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 23:16:41 policy-api | [2024-02-19T23:14:30.865+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:41 policy-api | [2024-02-19T23:14:31.072+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@2f84848e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@607c7f58, org.springframework.security.web.context.SecurityContextHolderFilter@7b3d759f, org.springframework.security.web.header.HeaderWriterFilter@15200332, org.springframework.security.web.authentication.logout.LogoutFilter@25e7e6d, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4c66b3d9, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@62c4ad40, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@9bc10bd, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@4bbb00a4, org.springframework.security.web.access.ExceptionTranslationFilter@4529b266, org.springframework.security.web.access.intercept.AuthorizationFilter@3413effc] 23:16:41 policy-api | [2024-02-19T23:14:31.863+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:41 policy-api | [2024-02-19T23:14:31.974+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:41 policy-api | [2024-02-19T23:14:32.003+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 23:16:41 policy-api | [2024-02-19T23:14:32.019+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.58 seconds (process running for 11.176) 23:16:41 policy-api | [2024-02-19T23:14:39.920+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:41 policy-api | [2024-02-19T23:14:39.921+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 23:16:41 policy-api | [2024-02-19T23:14:39.922+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 23:16:41 policy-api | [2024-02-19T23:14:48.531+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** 
OrderedServiceImpl implementers: 23:16:41 policy-api | [] 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.048262702Z level=info msg="Executing migration" id="create alert table v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.049261297Z level=info msg="Migration successfully executed" id="create alert table v1" duration=991.585µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.052688573Z level=info msg="Executing migration" id="add index alert org_id & id " 23:16:41 mariadb | 2024-02-19 23:14:08+00:00 [Note] [Entrypoint]: Waiting for server startup 23:16:41 mariadb | 2024-02-19 23:14:08 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ... 23:16:41 mariadb | 2024-02-19 23:14:08 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:41 mariadb | 2024-02-19 23:14:08 0 [Note] InnoDB: Number of transaction pools: 1 23:16:41 mariadb | 2024-02-19 23:14:08 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:41 mariadb | 2024-02-19 23:14:08 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:41 mariadb | 2024-02-19 23:14:08 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:41 mariadb | 2024-02-19 23:14:08 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:41 mariadb | 2024-02-19 23:14:08 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:41 mariadb | 2024-02-19 23:14:08 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:41 mariadb | 2024-02-19 23:14:08 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:41 mariadb | 2024-02-19 23:14:09 0 [Note] InnoDB: 128 rollback segments are active. 23:16:41 mariadb | 2024-02-19 23:14:09 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:41 mariadb | 2024-02-19 23:14:09 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:41 mariadb | 2024-02-19 23:14:09 0 [Note] InnoDB: log sequence number 46590; transaction id 14 23:16:41 mariadb | 2024-02-19 23:14:09 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:41 mariadb | 2024-02-19 23:14:09 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:41 mariadb | 2024-02-19 23:14:09 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 23:16:41 mariadb | 2024-02-19 23:14:09 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 23:16:41 mariadb | 2024-02-19 23:14:09 0 [Note] mariadbd: ready for connections. 23:16:41 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:16:41 mariadb | 2024-02-19 23:14:09+00:00 [Note] [Entrypoint]: Temporary server started. 
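Each grafana migrator entry in this log pairs a migration id with its measured duration, so unusually slow DDL (e.g. the 18.58 ms "create api_key table v2" above) stands out. A small sketch that ranks migrations by duration from a saved copy of this console output; the `console.log` filename and the exact logfmt field layout are assumptions:

```bash
#!/usr/bin/env bash
# Rank grafana migrations by reported duration in a saved console log.
# Assumes GNU grep/awk in a UTF-8 locale (durations use both ms and µs)
# and the logfmt layout shown above: id="..." duration=<number><unit>.
grep -o 'id="[^"]*" duration=[0-9.]*[mµ]s' console.log |
awk '{
    n = $NF; sub(/^duration=/, "", n)
    if (n ~ /µs$/) { sub(/µs$/, "", n); n /= 1000 }   # normalise to ms
    else          { sub(/ms$/, "", n) }
    sub(/ duration=.*/, "")                           # keep only id="..."
    printf "%12.3f ms  %s\n", n, $0
}' | sort -rn | head
```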
23:16:41 mariadb | 2024-02-19 23:14:11+00:00 [Note] [Entrypoint]: Creating user policy_user 23:16:41 mariadb | 2024-02-19 23:14:11+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:16:41 mariadb | 23:16:41 mariadb | 2024-02-19 23:14:11+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:16:41 mariadb | 23:16:41 mariadb | 2024-02-19 23:14:11+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:16:41 mariadb | #!/bin/bash -xv 23:16:41 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 23:16:41 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 23:16:41 mariadb | # 23:16:41 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:16:41 mariadb | # you may not use this file except in compliance with the License. 23:16:41 mariadb | # You may obtain a copy of the License at 23:16:41 mariadb | # 23:16:41 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:16:41 mariadb | # 23:16:41 mariadb | # Unless required by applicable law or agreed to in writing, software 23:16:41 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:16:41 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:16:41 mariadb | # See the License for the specific language governing permissions and 23:16:41 mariadb | # limitations under the License. 23:16:41 mariadb | 23:16:41 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:41 mariadb | do 23:16:41 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 23:16:41 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 23:16:41 mariadb | done 23:16:41 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:41 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 23:16:41 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:41 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:41 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 23:16:41 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:41 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:41 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 23:16:41 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:41 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:41 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 23:16:41 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:41 policy-apex-pdp | request.timeout.ms = 30000 23:16:41 policy-apex-pdp | retry.backoff.ms = 100 23:16:41 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:41 policy-apex-pdp | sasl.jaas.config = null 23:16:41 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:41 
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:41 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:41 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:41 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:41 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:41 policy-apex-pdp | sasl.login.class = null 23:16:41 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:41 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:41 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:41 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:41 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:41 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:41 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:41 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:41 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:41 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:41 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:41 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:41 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:41 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:41 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:41 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:41 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:41 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:41 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:41 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:41 policy-apex-pdp | security.providers = null 23:16:41 policy-apex-pdp | send.buffer.bytes = 131072 23:16:41 policy-apex-pdp | session.timeout.ms = 45000 23:16:41 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:41 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:41 policy-apex-pdp | ssl.cipher.suites = null 23:16:41 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:41 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:41 policy-apex-pdp | ssl.engine.factory.class = null 23:16:41 policy-apex-pdp | ssl.key.password = null 23:16:41 policy-db-migrator | Waiting for mariadb port 3306... 23:16:41 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 23:16:41 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 23:16:41 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 23:16:41 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 23:16:41 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 23:16:41 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 23:16:41 policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded! 
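Once the port check passes, policy-db-migrator inspects the recorded schema version for `policyadmin` (0 in this run) and applies every bundled release script up to 1300, as the `upgrade: 0 -> 1300` lines that follow show. A toy sketch of such a version-gated upgrade loop; the `sql/NNNN.sql` layout and the `schema_version` table are illustrative assumptions, not the migrator's real mechanism:

```bash
#!/usr/bin/env bash
# Toy version-gated upgrade loop in the spirit of policy-db-migrator:
# apply each release script at most once, in order, recording progress.
# Assumed layout: sql/0800.sql ... sql/1300.sql; MYSQL_ROOT_PASSWORD
# is taken from the environment, as in the mariadb entrypoint trace.
set -euo pipefail

DB="policyadmin"
MYSQL=(mysql -h mariadb -uroot -p"${MYSQL_ROOT_PASSWORD}" "$DB")

# Hypothetical single-row table tracking the last applied release.
"${MYSQL[@]}" -e "CREATE TABLE IF NOT EXISTS schema_version (version INT NOT NULL);"
current=$("${MYSQL[@]}" -N -e "SELECT COALESCE(MAX(version), 0) FROM schema_version;")

for script in sql/[0-9]*.sql; do
    [ -e "$script" ] || break            # no scripts bundled
    release=$(basename "$script" .sql)
    release=$((10#$release))             # strip leading zeros safely
    if [ "$release" -gt "$current" ]; then
        echo "upgrade: ${current} -> ${release}"
        "${MYSQL[@]}" < "$script"
        "${MYSQL[@]}" -e "DELETE FROM schema_version; INSERT INTO schema_version VALUES (${release});"
        current="$release"
    fi
done
echo "Done"
```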
23:16:41 policy-db-migrator | 321 blocks
23:16:41 policy-db-migrator | Preparing upgrade release version: 0800
23:16:41 policy-db-migrator | Preparing upgrade release version: 0900
23:16:41 policy-db-migrator | Preparing upgrade release version: 1000
23:16:41 policy-db-migrator | Preparing upgrade release version: 1100
23:16:41 policy-db-migrator | Preparing upgrade release version: 1200
23:16:41 policy-db-migrator | Preparing upgrade release version: 1300
23:16:41 policy-db-migrator | Done
23:16:41 policy-db-migrator | name version
23:16:41 policy-db-migrator | policyadmin 0
23:16:41 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
23:16:41 policy-db-migrator | upgrade: 0 -> 1300
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
23:16:41 policy-apex-pdp | ssl.keystore.certificate.chain = null
23:16:41 policy-apex-pdp | ssl.keystore.key = null
23:16:41 policy-apex-pdp | ssl.keystore.location = null
23:16:41 policy-apex-pdp | ssl.keystore.password = null
23:16:41 policy-apex-pdp | ssl.keystore.type = JKS
23:16:41 policy-apex-pdp | ssl.protocol = TLSv1.3
23:16:41 policy-apex-pdp | ssl.provider = null
23:16:41 policy-apex-pdp | ssl.secure.random.implementation = null
23:16:41 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
23:16:41 policy-apex-pdp | ssl.truststore.certificates = null
23:16:41 policy-apex-pdp | ssl.truststore.location = null
23:16:41 policy-apex-pdp | ssl.truststore.password = null
23:16:41 policy-apex-pdp | ssl.truststore.type = JKS
23:16:41 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:41 policy-apex-pdp | 
23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.126+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.127+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.127+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708384486125
23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.129+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-8a152ea0-3554-4e34-a917-801a2773d54e-1, groupId=8a152ea0-3554-4e34-a917-801a2773d54e] Subscribed to topic(s): policy-pdp-pap
23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.141+00:00|INFO|ServiceManager|main] service manager starting
23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.141+00:00|INFO|ServiceManager|main] service manager starting topics
23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.145+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=8a152ea0-3554-4e34-a917-801a2773d54e, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.164+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:41 policy-apex-pdp | 	allow.auto.create.topics = true
23:16:41 policy-apex-pdp | 	auto.commit.interval.ms = 5000
23:16:41 policy-apex-pdp | 	auto.include.jmx.reporter = true
23:16:41 policy-apex-pdp | 	auto.offset.reset = latest
23:16:41 policy-apex-pdp | 	bootstrap.servers = [kafka:9092]
23:16:41 policy-apex-pdp | 	check.crcs = true
23:16:41 policy-apex-pdp | 	client.dns.lookup = use_all_dns_ips
23:16:41 policy-apex-pdp | 	client.id = consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2
23:16:41 policy-apex-pdp | 	client.rack = 
23:16:41 policy-apex-pdp | 	connections.max.idle.ms = 540000
23:16:41 policy-apex-pdp | 	default.api.timeout.ms = 60000
23:16:41 policy-apex-pdp | 	enable.auto.commit = true
23:16:41 policy-apex-pdp | 	exclude.internal.topics = true
23:16:41 policy-apex-pdp | 	fetch.max.bytes = 52428800
23:16:41 policy-apex-pdp | 	fetch.max.wait.ms = 500
23:16:41 policy-apex-pdp | 	fetch.min.bytes = 1
23:16:41 policy-apex-pdp | 	group.id = 8a152ea0-3554-4e34-a917-801a2773d54e
23:16:41 policy-apex-pdp | 	group.instance.id = null
23:16:41 policy-apex-pdp | 	heartbeat.interval.ms = 3000
23:16:41 policy-apex-pdp | 	interceptor.classes = []
23:16:41 policy-apex-pdp | 	internal.leave.group.on.close = true
23:16:41 policy-apex-pdp | 	internal.throw.on.fetch.stable.offset.unsupported = false
23:16:41 policy-apex-pdp | 	isolation.level = read_uncommitted
23:16:41 policy-apex-pdp | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:41 policy-apex-pdp | 	max.partition.fetch.bytes = 1048576
23:16:41 policy-apex-pdp | 	max.poll.interval.ms = 300000
23:16:41 policy-apex-pdp | 	max.poll.records = 500
23:16:41 policy-apex-pdp | 	metadata.max.age.ms = 300000
23:16:41 policy-apex-pdp | 	metric.reporters = []
23:16:41 policy-apex-pdp | 	metrics.num.samples = 2
23:16:41 policy-apex-pdp | 	metrics.recording.level = INFO
23:16:41 policy-apex-pdp | 	metrics.sample.window.ms = 30000
23:16:41 policy-apex-pdp | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:41 policy-apex-pdp | 	receive.buffer.bytes = 65536
23:16:41 policy-apex-pdp | 	reconnect.backoff.max.ms = 1000
23:16:41 policy-apex-pdp | 	reconnect.backoff.ms = 50
23:16:41 policy-apex-pdp | 	request.timeout.ms = 30000
23:16:41 policy-apex-pdp | 	retry.backoff.ms = 100
23:16:41 policy-apex-pdp | 	sasl.client.callback.handler.class = null
23:16:41 policy-apex-pdp | 	sasl.jaas.config = null
23:16:41 policy-apex-pdp | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:41 policy-apex-pdp | 	sasl.kerberos.min.time.before.relogin = 60000
23:16:41 policy-apex-pdp | 	sasl.kerberos.service.name = null
23:16:41 policy-apex-pdp | 	sasl.kerberos.ticket.renew.jitter = 0.05
23:16:41 policy-apex-pdp | 	sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:41 policy-apex-pdp | 	sasl.login.callback.handler.class = null
23:16:41 policy-apex-pdp | 	sasl.login.class = null
23:16:41 policy-apex-pdp | 	sasl.login.connect.timeout.ms = null
23:16:41 policy-apex-pdp | 	sasl.login.read.timeout.ms = null
23:16:41 policy-apex-pdp | 	sasl.login.refresh.buffer.seconds = 300
23:16:41 policy-apex-pdp | 	sasl.login.refresh.min.period.seconds = 60
23:16:41 policy-apex-pdp | 	sasl.login.refresh.window.factor = 0.8
23:16:41 policy-apex-pdp | 	sasl.login.refresh.window.jitter = 0.05
23:16:41 policy-apex-pdp | 	sasl.login.retry.backoff.max.ms = 10000
23:16:41 policy-apex-pdp | 	sasl.login.retry.backoff.ms = 100
23:16:41 policy-apex-pdp | 	sasl.mechanism = GSSAPI
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.clock.skew.seconds = 30
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.expected.audience = null
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.expected.issuer = null
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.url = null
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.scope.claim.name = scope
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.sub.claim.name = sub
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.token.endpoint.url = null
23:16:41 policy-apex-pdp | 	security.protocol = PLAINTEXT
23:16:41 policy-apex-pdp | 	security.providers = null
23:16:41 policy-apex-pdp | 	send.buffer.bytes = 131072
23:16:41 policy-apex-pdp | 	session.timeout.ms = 45000
23:16:41 policy-apex-pdp | 	socket.connection.setup.timeout.max.ms = 30000
23:16:41 policy-apex-pdp | 	socket.connection.setup.timeout.ms = 10000
23:16:41 policy-apex-pdp | 	ssl.cipher.suites = null
23:16:41 policy-apex-pdp | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:41 policy-apex-pdp | 	ssl.endpoint.identification.algorithm = https
23:16:41 policy-apex-pdp | 	ssl.engine.factory.class = null
23:16:41 policy-apex-pdp | 	ssl.key.password = null
23:16:41 policy-apex-pdp | 	ssl.keymanager.algorithm = SunX509
23:16:41 policy-apex-pdp | 	ssl.keystore.certificate.chain = null
23:16:41 policy-apex-pdp | 	ssl.keystore.key = null
23:16:41 policy-apex-pdp | 	ssl.keystore.location = null
23:16:41 policy-apex-pdp | 	ssl.keystore.password = null
23:16:41 policy-apex-pdp | 	ssl.keystore.type = JKS
23:16:41 policy-apex-pdp | 	ssl.protocol = TLSv1.3
23:16:41 policy-apex-pdp | 	ssl.provider = null
23:16:41 policy-apex-pdp | 	ssl.secure.random.implementation = null
23:16:41 policy-apex-pdp | 	ssl.trustmanager.algorithm = PKIX
23:16:41 policy-apex-pdp | 	ssl.truststore.certificates = null
23:16:41 policy-apex-pdp | 	ssl.truststore.location = null
23:16:41 policy-apex-pdp | 	ssl.truststore.password = null
23:16:41 policy-apex-pdp | 	ssl.truststore.type = JKS
23:16:41 policy-apex-pdp | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:41 policy-apex-pdp | 
23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.172+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.172+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.172+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708384486172
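(Editor's sketch, not part of the console output above: the ConsumerConfig dump shows the apex-pdp engine consuming the policy-pdp-pap topic as a plain String consumer against kafka:9092. A minimal standalone Java client with the same key settings could look like the following; the class name and the "log-inspector" group id are hypothetical, everything else is taken from the dump. It assumes the kafka-clients 3.x library on the classpath and network access to the broker.)

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapTopicTail {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirrored from the ConsumerConfig dump in the log above.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "log-inspector"); // hypothetical group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            while (true) {
                // Poll and print whatever PAP/PDP messages arrive on the topic.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}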
23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.173+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2, groupId=8a152ea0-3554-4e34-a917-801a2773d54e] Subscribed to topic(s): policy-pdp-pap
23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.173+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=74ade798-13a5-4ff0-98ae-519c6680c266, alive=false, publisher=null]]: starting
23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.186+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:16:41 policy-apex-pdp | 	acks = -1
23:16:41 policy-apex-pdp | 	auto.include.jmx.reporter = true
23:16:41 policy-apex-pdp | 	batch.size = 16384
23:16:41 policy-apex-pdp | 	bootstrap.servers = [kafka:9092]
23:16:41 policy-apex-pdp | 	buffer.memory = 33554432
23:16:41 policy-apex-pdp | 	client.dns.lookup = use_all_dns_ips
23:16:41 policy-apex-pdp | 	client.id = producer-1
23:16:41 policy-apex-pdp | 	compression.type = none
23:16:41 policy-apex-pdp | 	connections.max.idle.ms = 540000
23:16:41 policy-apex-pdp | 	delivery.timeout.ms = 120000
23:16:41 policy-apex-pdp | 	enable.idempotence = true
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.053562236Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=873.123µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.057626576Z level=info msg="Executing migration" id="add index alert state"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.058401289Z level=info msg="Migration successfully executed" id="add index alert state" duration=774.263µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.061570344Z level=info msg="Executing migration" id="add index alert dashboard_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.062332647Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=762.033µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.065829943Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.066404376Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=574.383µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.070760846Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.071564791Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=808.745µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.074960056Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.075709139Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=748.073µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.079017005Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.093300851Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=14.283356ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.097664512Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.098132544Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=470.382µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.101888751Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.102495704Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=606.723µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.1060147Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.106445622Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=432.712µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.110932623Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.111761817Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=837.124µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.115506274Z level=info msg="Executing migration" id="create alert_notification table v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.1165665Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.059806ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.120862459Z level=info msg="Executing migration" id="Add column is_default"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.124573286Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.711707ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.131233818Z level=info msg="Executing migration" id="Add column frequency"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.134737764Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.498336ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.137853268Z level=info msg="Executing migration" id="Add column send_reminder"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.141238874Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.385266ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.14469967Z level=info msg="Executing migration" id="Add column disable_resolve_message"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.148369187Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.668977ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.152435715Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.15327181Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=835.935µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.156929786Z level=info msg="Executing migration" id="Update alert table charset"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.156979286Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=50.15µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.160213161Z level=info msg="Executing migration" id="Update alert_notification table charset"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.160249141Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=36.7µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.189499264Z level=info msg="Executing migration" id="create notification_journal table v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.190254257Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=756.803µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.19752572Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.198410725Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=884.885µs
23:16:41 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.202011961Z level=info msg="Executing migration" id="drop alert_notification_journal"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.202704114Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=692.053µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.20615029Z level=info msg="Executing migration" id="create alert_notification_state table v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.206840133Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=689.723µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.210227278Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.211104372Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=876.634µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.216049974Z level=info msg="Executing migration" id="Add for to alert table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.21960128Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.547466ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.225346976Z level=info msg="Executing migration" id="Add column uid in alert_notification"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.231386424Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=6.042208ms
23:16:41 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:16:41 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
23:16:41 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:41 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:16:41 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
23:16:41 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:41 mariadb | 
23:16:41 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
23:16:41 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
23:16:41 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
23:16:41 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
23:16:41 mariadb | 
23:16:41 mariadb | 2024-02-19 23:14:12+00:00 [Note] [Entrypoint]: Stopping temporary server
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] InnoDB: FTS optimize thread exiting.
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] InnoDB: Starting shutdown...
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] InnoDB: Buffer pool(s) dump completed at 240219 23:14:12
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] InnoDB: Shutdown completed; log sequence number 323699; transaction id 298
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] mariadbd: Shutdown complete
23:16:41 mariadb | 
23:16:41 mariadb | 2024-02-19 23:14:12+00:00 [Note] [Entrypoint]: Temporary server stopped
23:16:41 mariadb | 
23:16:41 mariadb | 2024-02-19 23:14:12+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
23:16:41 mariadb | 
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] InnoDB: Number of transaction pools: 1
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] InnoDB: Completed initialization of buffer pool
23:16:41 mariadb | 2024-02-19 23:14:12 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
23:16:41 mariadb | 2024-02-19 23:14:13 0 [Note] InnoDB: 128 rollback segments are active.
23:16:41 mariadb | 2024-02-19 23:14:13 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
23:16:41 mariadb | 2024-02-19 23:14:13 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
23:16:41 mariadb | 2024-02-19 23:14:13 0 [Note] InnoDB: log sequence number 323699; transaction id 299
23:16:41 mariadb | 2024-02-19 23:14:13 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
23:16:41 mariadb | 2024-02-19 23:14:13 0 [Note] Plugin 'FEEDBACK' is disabled.
23:16:41 mariadb | 2024-02-19 23:14:13 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
23:16:41 mariadb | 2024-02-19 23:14:13 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
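(Editor's sketch, not part of the console output: the entrypoint trace above creates the policy databases and grants them to policy_user, whose password is shown as policy_user in the -upolicy_user -ppolicy_user invocation. A minimal JDBC smoke test against that account could look like the following; the class name is hypothetical, and it assumes the MariaDB Connector/J driver on the classpath plus network access to the "mariadb" container on port 3306.)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PolicyDbSmokeTest {
    public static void main(String[] args) throws Exception {
        // Host, port, database, user, and password are taken from the entrypoint trace above.
        String url = "jdbc:mariadb://mariadb:3306/policyadmin";
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT VERSION()")) {
            while (rs.next()) {
                // Expected to report the server version, e.g. 10.10.2-MariaDB.
                System.out.println("Connected to: " + rs.getString(1));
            }
        }
    }
}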
23:16:41 mariadb | 2024-02-19 23:14:13 0 [Note] Server socket created on IP: '0.0.0.0'.
23:16:41 mariadb | 2024-02-19 23:14:13 0 [Note] Server socket created on IP: '::'.
23:16:41 mariadb | 2024-02-19 23:14:13 0 [Note] mariadbd: ready for connections.
23:16:41 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution
23:16:41 mariadb | 2024-02-19 23:14:13 0 [Note] InnoDB: Buffer pool(s) load completed at 240219 23:14:13
23:16:41 mariadb | 2024-02-19 23:14:13 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
23:16:41 mariadb | 2024-02-19 23:14:13 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication)
23:16:41 mariadb | 2024-02-19 23:14:13 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication)
23:16:41 mariadb | 2024-02-19 23:14:13 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication)
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper)
23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
23:16:41 kafka | [2024-02-19 23:14:16,301] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
23:16:41 kafka | [2024-02-19 23:14:16,303] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1f6c9cd8 (org.apache.zookeeper.ZooKeeper)
23:16:41 kafka | [2024-02-19 23:14:16,308] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
23:16:41 kafka | [2024-02-19 23:14:16,314] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
23:16:41 kafka | [2024-02-19 23:14:16,316] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
23:16:41 kafka | [2024-02-19 23:14:16,325] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn)
23:16:41 kafka | [2024-02-19 23:14:16,334] INFO Socket connection established, initiating session, client: /172.17.0.8:50362, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn)
23:16:41 kafka | [2024-02-19 23:14:16,343] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000003a15c0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
23:16:41 kafka | [2024-02-19 23:14:16,347] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
23:16:41 kafka | [2024-02-19 23:14:16,653] INFO Cluster ID = afQCmge3SLiyxoKHB7mgXQ (kafka.server.KafkaServer)
23:16:41 kafka | [2024-02-19 23:14:16,658] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
23:16:41 kafka | [2024-02-19 23:14:16,706] INFO KafkaConfig values:
23:16:41 kafka | 	advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
23:16:41 kafka | 	alter.config.policy.class.name = null
23:16:41 kafka | 	alter.log.dirs.replication.quota.window.num = 11
23:16:41 kafka | 	alter.log.dirs.replication.quota.window.size.seconds = 1
23:16:41 kafka | 	authorizer.class.name = 
23:16:41 kafka | 	auto.create.topics.enable = true
23:16:41 kafka | 	auto.include.jmx.reporter = true
23:16:41 kafka | 	auto.leader.rebalance.enable = true
23:16:41 kafka | 	background.threads = 10
23:16:41 kafka | 	broker.heartbeat.interval.ms = 2000
23:16:41 kafka | 	broker.id = 1
23:16:41 kafka | 	broker.id.generation.enable = true
23:16:41 kafka | 	broker.rack = null
23:16:41 kafka | 	broker.session.timeout.ms = 9000
23:16:41 kafka | 	client.quota.callback.class = null
23:16:41 kafka | 	compression.type = producer
23:16:41 kafka | 	connection.failed.authentication.delay.ms = 100
23:16:41 kafka | 	connections.max.idle.ms = 600000
23:16:41 kafka | 	connections.max.reauth.ms = 0
23:16:41 kafka | 	control.plane.listener.name = null
23:16:41 kafka | 	controlled.shutdown.enable = true
23:16:41 kafka | 	controlled.shutdown.max.retries = 3
23:16:41 kafka | 	controlled.shutdown.retry.backoff.ms = 5000
23:16:41 kafka | 	controller.listener.names = null
23:16:41 kafka | 	controller.quorum.append.linger.ms = 25
23:16:41 kafka | 	controller.quorum.election.backoff.max.ms = 1000
23:16:41 kafka | 	controller.quorum.election.timeout.ms = 1000
23:16:41 kafka | 	controller.quorum.fetch.timeout.ms = 2000
23:16:41 kafka | 	controller.quorum.request.timeout.ms = 2000
23:16:41 kafka | 	controller.quorum.retry.backoff.ms = 20
23:16:41 kafka | 	controller.quorum.voters = []
23:16:41 kafka | 	controller.quota.window.num = 11
23:16:41 kafka | 	controller.quota.window.size.seconds = 1
23:16:41 kafka | 	controller.socket.timeout.ms = 30000
23:16:41 kafka | 	create.topic.policy.class.name = null
23:16:41 kafka | 	default.replication.factor = 1
23:16:41 kafka | 	delegation.token.expiry.check.interval.ms = 3600000
23:16:41 kafka | 	delegation.token.expiry.time.ms = 86400000
23:16:41 kafka | 	delegation.token.master.key = null
23:16:41 kafka | 	delegation.token.max.lifetime.ms = 604800000
23:16:41 kafka | 	delegation.token.secret.key = null
23:16:41 kafka | 	delete.records.purgatory.purge.interval.requests = 1
23:16:41 kafka | 	delete.topic.enable = true
23:16:41 kafka | 	early.start.listeners = null
23:16:41 kafka | 	fetch.max.bytes = 57671680
23:16:41 kafka | 	fetch.purgatory.purge.interval.requests = 1000
23:16:41 kafka | 	group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor]
23:16:41 kafka | 	group.consumer.heartbeat.interval.ms = 5000
23:16:41 kafka | 	group.consumer.max.heartbeat.interval.ms = 15000
23:16:41 kafka | 	group.consumer.max.session.timeout.ms = 60000
23:16:41 kafka | 	group.consumer.max.size = 2147483647
23:16:41 kafka | 	group.consumer.min.heartbeat.interval.ms = 5000
23:16:41 kafka | 	group.consumer.min.session.timeout.ms = 45000
23:16:41 kafka | 	group.consumer.session.timeout.ms = 45000
23:16:41 kafka | 	group.coordinator.new.enable = false
23:16:41 kafka | 	group.coordinator.threads = 1
23:16:41 kafka | 	group.initial.rebalance.delay.ms = 3000
23:16:41 kafka | 	group.max.session.timeout.ms = 1800000
23:16:41 kafka | 	group.max.size = 2147483647
23:16:41 kafka | 	group.min.session.timeout.ms = 6000
23:16:41 kafka | 	initial.broker.registration.timeout.ms = 60000
23:16:41 kafka | 	inter.broker.listener.name = PLAINTEXT
23:16:41 kafka | 	inter.broker.protocol.version = 3.6-IV2
23:16:41 kafka | 	kafka.metrics.polling.interval.secs = 10
23:16:41 kafka | 	kafka.metrics.reporters = []
23:16:41 kafka | 	leader.imbalance.check.interval.seconds = 300
23:16:41 kafka | 	leader.imbalance.per.broker.percentage = 10
23:16:41 kafka | 	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
23:16:41 kafka | 	listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
23:16:41 kafka | 	log.cleaner.backoff.ms = 15000
23:16:41 kafka | 	log.cleaner.dedupe.buffer.size = 134217728
23:16:41 kafka | 	log.cleaner.delete.retention.ms = 86400000
23:16:41 kafka | 	log.cleaner.enable = true
23:16:41 kafka | 	log.cleaner.io.buffer.load.factor = 0.9
23:16:41 kafka | 	log.cleaner.io.buffer.size = 524288
23:16:41 kafka | 	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
23:16:41 kafka | 	log.cleaner.max.compaction.lag.ms = 9223372036854775807
23:16:41 kafka | 	log.cleaner.min.cleanable.ratio = 0.5
23:16:41 kafka | 	log.cleaner.min.compaction.lag.ms = 0
23:16:41 kafka | 	log.cleaner.threads = 1
23:16:41 kafka | 	log.cleanup.policy = [delete]
23:16:41 kafka | 	log.dir = /tmp/kafka-logs
23:16:41 kafka | 	log.dirs = /var/lib/kafka/data
23:16:41 kafka | 	log.flush.interval.messages = 9223372036854775807
23:16:41 kafka | 	log.flush.interval.ms = null
23:16:41 kafka | 	log.flush.offset.checkpoint.interval.ms = 60000
23:16:41 kafka | 	log.flush.scheduler.interval.ms = 9223372036854775807
23:16:41 kafka | 	log.flush.start.offset.checkpoint.interval.ms = 60000
23:16:41 kafka | 	log.index.interval.bytes = 4096
23:16:41 kafka | 	log.index.size.max.bytes = 10485760
23:16:41 kafka | 	log.local.retention.bytes = -2
23:16:41 kafka | 	log.local.retention.ms = -2
23:16:41 kafka | 	log.message.downconversion.enable = true
23:16:41 kafka | 	log.message.format.version = 3.0-IV1
23:16:41 kafka | 	log.message.timestamp.after.max.ms = 9223372036854775807
23:16:41 kafka | 	log.message.timestamp.before.max.ms = 9223372036854775807
23:16:41 kafka | 	log.message.timestamp.difference.max.ms = 9223372036854775807
23:16:41 kafka | 	log.message.timestamp.type = CreateTime
23:16:41 kafka | 	log.preallocate = false
23:16:41 kafka | 	log.retention.bytes = -1
23:16:41 kafka | 	log.retention.check.interval.ms = 300000
23:16:41 kafka | 	log.retention.hours = 168
23:16:41 kafka | 	log.retention.minutes = null
23:16:41 kafka | 	log.retention.ms = null
23:16:41 kafka | 	log.roll.hours = 168
23:16:41 kafka | 	log.roll.jitter.hours = 0
23:16:41 kafka | 	log.roll.jitter.ms = null
23:16:41 kafka | 	log.roll.ms = null
23:16:41 kafka | 	log.segment.bytes = 1073741824
23:16:41 policy-apex-pdp | 	interceptor.classes = []
23:16:41 policy-apex-pdp | 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:41 policy-apex-pdp | 	linger.ms = 0
23:16:41 policy-apex-pdp | 	max.block.ms = 60000
23:16:41 policy-apex-pdp | 	max.in.flight.requests.per.connection = 5
23:16:41 policy-apex-pdp | 	max.request.size = 1048576
23:16:41 policy-apex-pdp | 	metadata.max.age.ms = 300000
23:16:41 policy-apex-pdp | 	metadata.max.idle.ms = 300000
23:16:41 policy-apex-pdp | 	metric.reporters = []
23:16:41 policy-apex-pdp | 	metrics.num.samples = 2
23:16:41 policy-apex-pdp | 	metrics.recording.level = INFO
23:16:41 policy-apex-pdp | 	metrics.sample.window.ms = 30000
23:16:41 policy-apex-pdp | 	partitioner.adaptive.partitioning.enable = true
23:16:41 policy-apex-pdp | 	partitioner.availability.timeout.ms = 0
23:16:41 policy-apex-pdp | 	partitioner.class = null
23:16:41 policy-apex-pdp | 	partitioner.ignore.keys = false
23:16:41 policy-apex-pdp | 	receive.buffer.bytes = 32768
23:16:41 policy-apex-pdp | 	reconnect.backoff.max.ms = 1000
23:16:41 policy-apex-pdp | 	reconnect.backoff.ms = 50
23:16:41 policy-apex-pdp | 	request.timeout.ms = 30000
23:16:41 policy-apex-pdp | 	retries = 2147483647
23:16:41 policy-apex-pdp | 	retry.backoff.ms = 100
23:16:41 policy-apex-pdp | 	sasl.client.callback.handler.class = null
23:16:41 policy-apex-pdp | 	sasl.jaas.config = null
23:16:41 policy-apex-pdp | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:41 policy-apex-pdp | 	sasl.kerberos.min.time.before.relogin = 60000
23:16:41 policy-apex-pdp | 	sasl.kerberos.service.name = null
23:16:41 policy-apex-pdp | 	sasl.kerberos.ticket.renew.jitter = 0.05
23:16:41 policy-apex-pdp | 	sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:41 policy-apex-pdp | 	sasl.login.callback.handler.class = null
23:16:41 policy-apex-pdp | 	sasl.login.class = null
23:16:41 policy-apex-pdp | 	sasl.login.connect.timeout.ms = null
23:16:41 policy-apex-pdp | 	sasl.login.read.timeout.ms = null
23:16:41 policy-apex-pdp | 	sasl.login.refresh.buffer.seconds = 300
23:16:41 policy-apex-pdp | 	sasl.login.refresh.min.period.seconds = 60
23:16:41 policy-apex-pdp | 	sasl.login.refresh.window.factor = 0.8
23:16:41 policy-apex-pdp | 	sasl.login.refresh.window.jitter = 0.05
23:16:41 policy-apex-pdp | 	sasl.login.retry.backoff.max.ms = 10000
23:16:41 policy-apex-pdp | 	sasl.login.retry.backoff.ms = 100
23:16:41 policy-apex-pdp | 	sasl.mechanism = GSSAPI
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.clock.skew.seconds = 30
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.expected.audience = null
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.expected.issuer = null
23:16:41 kafka | 	log.segment.delete.delay.ms = 60000
23:16:41 kafka | 	max.connection.creation.rate = 2147483647
23:16:41 kafka | 	max.connections = 2147483647
23:16:41 kafka | 	max.connections.per.ip = 2147483647
23:16:41 kafka | 	max.connections.per.ip.overrides = 
23:16:41 kafka | 	max.incremental.fetch.session.cache.slots = 1000
23:16:41 kafka | 	message.max.bytes = 1048588
23:16:41 kafka | 	metadata.log.dir = null
23:16:41 kafka | 	metadata.log.max.record.bytes.between.snapshots = 20971520
23:16:41 kafka | 	metadata.log.max.snapshot.interval.ms = 3600000
23:16:41 kafka | 	metadata.log.segment.bytes = 1073741824
23:16:41 kafka | 	metadata.log.segment.min.bytes = 8388608
23:16:41 kafka | 	metadata.log.segment.ms = 604800000
23:16:41 kafka | 	metadata.max.idle.interval.ms = 500
23:16:41 kafka | 	metadata.max.retention.bytes = 104857600
23:16:41 kafka | 	metadata.max.retention.ms = 604800000
23:16:41 kafka | 	metric.reporters = []
23:16:41 kafka | 	metrics.num.samples = 2
23:16:41 kafka | 	metrics.recording.level = INFO
23:16:41 kafka | 	metrics.sample.window.ms = 30000
23:16:41 kafka | 	min.insync.replicas = 1
23:16:41 kafka | 	node.id = 1
23:16:41 kafka | 	num.io.threads = 8
23:16:41 kafka | 	num.network.threads = 3
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.236062155Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.236291856Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=237.881µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.239672812Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.241011328Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.337346ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.246596233Z level=info msg="Executing migration" id="Remove unique index org_id_name"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.247858609Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.262366ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.253867147Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.257637293Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.769686ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.264117192Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.264302453Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=179.121µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.271465597Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.272876723Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.411135ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.277947806Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.279392563Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.439437ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.284410055Z level=info msg="Executing migration" id="Drop old annotation table v4"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.284536086Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=126.981µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.288138282Z level=info msg="Executing migration" id="create annotation table v5"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.288921685Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=783.173µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.293433396Z level=info msg="Executing migration" id="add index annotation 0 v3"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.294512661Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.077865ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.298290728Z level=info msg="Executing migration" id="add index annotation 1 v3"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.299581384Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.290636ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.304373346Z level=info msg="Executing migration" id="add index annotation 2 v3"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.30526736Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=893.714µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.313521678Z level=info msg="Executing migration" id="add index annotation 3 v3"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.315024144Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.501576ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.320524659Z level=info msg="Executing migration" id="add index annotation 4 v3"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.321567724Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.048175ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.326291135Z level=info msg="Executing migration" id="Update annotation table charset"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.326321905Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=31.55µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.329131798Z level=info msg="Executing migration" id="Add column region_id to annotation table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.333245617Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.113859ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.337446535Z level=info msg="Executing migration" id="Drop category_id index"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.33829034Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=845.325µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.342428558Z level=info msg="Executing migration" id="Add column tags to annotation table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.345511992Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.085734ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.351786092Z level=info msg="Executing migration" id="Create annotation_tag table v2"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.352495225Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=698.953µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.358983954Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.360497311Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.513627ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.365847285Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.367157891Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.311796ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.370254535Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.386638219Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=16.383014ms
23:16:41 kafka | 	num.partitions = 1
23:16:41 kafka | 	num.recovery.threads.per.data.dir = 1
23:16:41 kafka | 	num.replica.alter.log.dirs.threads = null
23:16:41 kafka | 	num.replica.fetchers = 1
23:16:41 kafka | 	offset.metadata.max.bytes = 4096
23:16:41 kafka | 	offsets.commit.required.acks = -1
23:16:41 kafka | 	offsets.commit.timeout.ms = 5000
23:16:41 kafka | 	offsets.load.buffer.size = 5242880
23:16:41 kafka | 	offsets.retention.check.interval.ms = 600000
23:16:41 kafka | 	offsets.retention.minutes = 10080
23:16:41 kafka | 	offsets.topic.compression.codec = 0
23:16:41 kafka | 	offsets.topic.num.partitions = 50
23:16:41 kafka | 	offsets.topic.replication.factor = 1
23:16:41 kafka | 	offsets.topic.segment.bytes = 104857600
23:16:41 kafka | 	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
23:16:41 kafka | 	password.encoder.iterations = 4096
23:16:41 kafka | 	password.encoder.key.length = 128
23:16:41 kafka | 	password.encoder.keyfactory.algorithm = null
23:16:41 kafka | 	password.encoder.old.secret = null
23:16:41 kafka | 	password.encoder.secret = null
23:16:41 kafka | 	principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
23:16:41 kafka | 	process.roles = []
23:16:41 kafka | 	producer.id.expiration.check.interval.ms = 600000
23:16:41 kafka | 	producer.id.expiration.ms = 86400000
23:16:41 kafka | 	producer.purgatory.purge.interval.requests = 1000
23:16:41 kafka | 	queued.max.request.bytes = -1
23:16:41 kafka | 	queued.max.requests = 500
23:16:41 kafka | 	quota.window.num = 11
23:16:41 kafka | 	quota.window.size.seconds = 1
23:16:41 kafka | 	remote.log.index.file.cache.total.size.bytes = 1073741824
23:16:41 kafka | 	remote.log.manager.task.interval.ms = 30000
23:16:41 kafka | 	remote.log.manager.task.retry.backoff.max.ms = 30000
23:16:41 kafka | 	remote.log.manager.task.retry.backoff.ms = 500
23:16:41 kafka | 	remote.log.manager.task.retry.jitter = 0.2
23:16:41 kafka | 	remote.log.manager.thread.pool.size = 10
23:16:41 kafka | 	remote.log.metadata.custom.metadata.max.bytes = 128
23:16:41 kafka | 	remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
23:16:41 kafka | 	remote.log.metadata.manager.class.path = null
23:16:41 kafka | 	remote.log.metadata.manager.impl.prefix = rlmm.config.
23:16:41 kafka | 	remote.log.metadata.manager.listener.name = null
23:16:41 kafka | 	remote.log.reader.max.pending.tasks = 100
23:16:41 kafka | 	remote.log.reader.threads = 10
23:16:41 kafka | 	remote.log.storage.manager.class.name = null
23:16:41 kafka | 	remote.log.storage.manager.class.path = null
23:16:41 kafka | 	remote.log.storage.manager.impl.prefix = rsm.config.
23:16:41 kafka | 	remote.log.storage.system.enable = false
23:16:41 kafka | 	replica.fetch.backoff.ms = 1000
23:16:41 kafka | 	replica.fetch.max.bytes = 1048576
23:16:41 kafka | 	replica.fetch.min.bytes = 1
23:16:41 kafka | 	replica.fetch.response.max.bytes = 10485760
23:16:41 kafka | 	replica.fetch.wait.max.ms = 500
23:16:41 kafka | 	replica.high.watermark.checkpoint.interval.ms = 5000
23:16:41 kafka | 	replica.lag.time.max.ms = 30000
23:16:41 kafka | 	replica.selector.class = null
23:16:41 kafka | 	replica.socket.receive.buffer.bytes = 65536
23:16:41 kafka | 	replica.socket.timeout.ms = 30000
23:16:41 kafka | 	replication.quota.window.num = 11
23:16:41 kafka | 	replication.quota.window.size.seconds = 1
23:16:41 kafka | 	request.timeout.ms = 30000
23:16:41 kafka | 	reserved.broker.max.id = 1000
23:16:41 kafka | 	sasl.client.callback.handler.class = null
23:16:41 kafka | 	sasl.enabled.mechanisms = [GSSAPI]
23:16:41 kafka | 	sasl.jaas.config = null
23:16:41 kafka | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:41 kafka | 	sasl.kerberos.min.time.before.relogin = 60000
23:16:41 kafka | 	sasl.kerberos.principal.to.local.rules = [DEFAULT]
23:16:41 kafka | 	sasl.kerberos.service.name = null
23:16:41 kafka | 	sasl.kerberos.ticket.renew.jitter = 0.05
23:16:41 kafka | 	sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.390918449Z level=info msg="Executing migration" id="Create annotation_tag table v3"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.391408791Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=490.052µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.400341622Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.401785628Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.443836ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.412462336Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.413009479Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=546.713µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.416450465Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.jwks.endpoint.url = null
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.scope.claim.name = scope
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.sub.claim.name = sub
23:16:41 policy-apex-pdp | 	sasl.oauthbearer.token.endpoint.url = null
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | 
23:16:41 policy-db-migrator | > upgrade 0450-pdpgroup.sql
23:16:41 policy-db-migrator | --------------
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION`
VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0470-pdp.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.417044117Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=596.722µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.423219636Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.423441227Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=221.451µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.426240189Z level=info msg="Executing migration" id="Add created time to annotation table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.429152492Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=2.910323ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.433307092Z level=info msg="Executing migration" id="Add updated time to annotation table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.436336295Z level=info msg="Migration successfully executed" id="Add updated time to 
annotation table" duration=3.028893ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.440717855Z level=info msg="Executing migration" id="Add index for created in annotation table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.44166469Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=942.705µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.447065374Z level=info msg="Executing migration" id="Add index for updated in annotation table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.448032038Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=988.294µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.452086286Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.452386147Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=299.491µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.458307925Z level=info msg="Executing migration" id="Add epoch_end column" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.462472454Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.163969ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.467599067Z level=info msg="Executing migration" id="Add index for epoch_end" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.468487511Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=882.524µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.473042312Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.473209983Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=168.221µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.478568327Z level=info msg="Executing migration" id="Move region to single row" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.479007459Z level=info msg="Migration successfully executed" id="Move region to single row" duration=438.692µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.486335063Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.48811576Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.784268ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.495005692Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.495975066Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=969.394µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.499635912Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.500585117Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=948.865µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.505239188Z level=info msg="Executing migration" id="Add index for 
org_id_epoch_end_epoch on annotation table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.506088902Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=849.554µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.510548422Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.511411637Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=863.455µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.515004783Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.515820706Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=815.633µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.520588828Z level=info msg="Executing migration" id="Increase tags column to length 4096" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.520695908Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=107.88µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.525093948Z level=info msg="Executing migration" id="create test_data table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.526583925Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.489837ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.532596862Z level=info msg="Executing migration" id="create dashboard_version table v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.533302735Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=708.613µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.541014361Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.542313887Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.299276ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.546497945Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.548000053Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.501448ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.552338032Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.552733473Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=393.061µs 23:16:41 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:41 policy-apex-pdp | security.providers = null 23:16:41 policy-apex-pdp | send.buffer.bytes = 131072 23:16:41 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:41 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:41 policy-apex-pdp | ssl.cipher.suites = null 23:16:41 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:41 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:41 policy-apex-pdp | ssl.engine.factory.class = null 23:16:41 policy-apex-pdp | ssl.key.password 
= null 23:16:41 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:41 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:41 policy-apex-pdp | ssl.keystore.key = null 23:16:41 policy-apex-pdp | ssl.keystore.location = null 23:16:41 policy-apex-pdp | ssl.keystore.password = null 23:16:41 policy-apex-pdp | ssl.keystore.type = JKS 23:16:41 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:41 policy-apex-pdp | ssl.provider = null 23:16:41 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:41 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:41 policy-apex-pdp | ssl.truststore.certificates = null 23:16:41 policy-apex-pdp | ssl.truststore.location = null 23:16:41 policy-apex-pdp | ssl.truststore.password = null 23:16:41 policy-apex-pdp | ssl.truststore.type = JKS 23:16:41 policy-apex-pdp | transaction.timeout.ms = 60000 23:16:41 policy-apex-pdp | transactional.id = null 23:16:41 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:41 policy-apex-pdp | 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.195+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.213+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.213+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.213+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708384486213 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.213+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=74ade798-13a5-4ff0-98ae-519c6680c266, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.213+00:00|INFO|ServiceManager|main] service manager starting set alive 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.214+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.216+00:00|INFO|ServiceManager|main] service manager starting topic sinks 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.216+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.218+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.218+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.218+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.218+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=8a152ea0-3554-4e34-a917-801a2773d54e, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering 
org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.219+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=8a152ea0-3554-4e34-a917-801a2773d54e, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.219+00:00|INFO|ServiceManager|main] service manager starting Create REST server 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.249+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 23:16:41 policy-apex-pdp | [] 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.261+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 
0530-toscacapabilityassignments_toscacapabilityassignment.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, 
concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0630-toscanodetype.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT 
NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0660-toscaparameter.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"0772bae8-eb24-4250-9177-542918774eba","timestampMs":1708384486219,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup"} 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.404+00:00|INFO|ServiceManager|main] service manager starting Rest Server 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.405+00:00|INFO|ServiceManager|main] service manager starting 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.405+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.405+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.415+00:00|INFO|ServiceManager|main] service manager started 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.415+00:00|INFO|ServiceManager|main] service manager started 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.415+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0670-toscapolicies.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0690-toscapolicy.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE 
(conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0730-toscaproperty.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0780-toscarequirements.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, 
PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.415+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], 
context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.603+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2, groupId=8a152ea0-3554-4e34-a917-801a2773d54e] Cluster ID: afQCmge3SLiyxoKHB7mgXQ 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.603+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: afQCmge3SLiyxoKHB7mgXQ 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.605+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2, groupId=8a152ea0-3554-4e34-a917-801a2773d54e] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.616+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.621+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2, groupId=8a152ea0-3554-4e34-a917-801a2773d54e] (Re-)joining group 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.636+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2, groupId=8a152ea0-3554-4e34-a917-801a2773d54e] Request joining group due to: need to re-join with the given member-id: consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2-ed48d9f0-0d89-419b-b201-03b48a20b29c 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.636+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2, groupId=8a152ea0-3554-4e34-a917-801a2773d54e] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:41 policy-apex-pdp | [2024-02-19T23:14:46.636+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2, groupId=8a152ea0-3554-4e34-a917-801a2773d54e] (Re-)joining group 23:16:41 policy-apex-pdp | [2024-02-19T23:14:47.104+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 23:16:41 policy-apex-pdp | [2024-02-19T23:14:47.105+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 23:16:41 policy-apex-pdp | [2024-02-19T23:14:49.642+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2, groupId=8a152ea0-3554-4e34-a917-801a2773d54e] Successfully joined group with generation Generation{generationId=1, memberId='consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2-ed48d9f0-0d89-419b-b201-03b48a20b29c', protocol='range'} 23:16:41 policy-apex-pdp | [2024-02-19T23:14:49.649+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2, groupId=8a152ea0-3554-4e34-a917-801a2773d54e] Finished assignment for group at generation 1: {consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2-ed48d9f0-0d89-419b-b201-03b48a20b29c=Assignment(partitions=[policy-pdp-pap-0])} 23:16:41 policy-apex-pdp | [2024-02-19T23:14:49.657+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2, groupId=8a152ea0-3554-4e34-a917-801a2773d54e] Successfully synced group in generation Generation{generationId=1, memberId='consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2-ed48d9f0-0d89-419b-b201-03b48a20b29c', protocol='range'} 23:16:41 policy-apex-pdp | [2024-02-19T23:14:49.657+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2, groupId=8a152ea0-3554-4e34-a917-801a2773d54e] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:41 policy-apex-pdp | [2024-02-19T23:14:49.659+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2, groupId=8a152ea0-3554-4e34-a917-801a2773d54e] Adding newly assigned partitions: policy-pdp-pap-0 23:16:41 policy-apex-pdp | [2024-02-19T23:14:49.667+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2, groupId=8a152ea0-3554-4e34-a917-801a2773d54e] Found no committed offset for partition policy-pdp-pap-0 23:16:41 policy-apex-pdp | [2024-02-19T23:14:49.680+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2, groupId=8a152ea0-3554-4e34-a917-801a2773d54e] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
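The consumer lines above trace the standard Kafka group-membership handshake: the first JoinGroup attempt is refused with MemberIdRequiredException, the client immediately rejoins carrying the member id the broker assigned, generation 1 is formed, partition policy-pdp-pap-0 is assigned and synced, and because no committed offset exists the position is reset from the leader. A minimal consumer sketch that exercises the same path is below; the topic, group id, and bootstrap address are copied from this log, while the rest is a plain defaults-based setup rather than the actual policy-apex-pdp wiring.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    // Sketch of the join/assign flow logged above: the first poll() triggers
    // JoinGroup, the broker answers with MemberIdRequiredException, and the
    // client rejoins with its assigned member id - all inside the client.
    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "8a152ea0-3554-4e34-a917-801a2773d54e");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("policy-pdp-pap"));
                // With no committed offset for the group, the position falls
                // back to auto.offset.reset, matching the "Resetting offset"
                // line above.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            r.partition(), r.offset(), r.value());
                }
            }
        }
    }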
23:16:41 policy-apex-pdp | [2024-02-19T23:14:56.148+00:00|INFO|RequestLog|qtp1068445309-33] 172.17.0.4 - policyadmin [19/Feb/2024:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.49.1" 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.219+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:41 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5efb7248-7e09-4e3f-a3fc-dc46b4b44102","timestampMs":1708384506219,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup"} 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.242+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:41 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5efb7248-7e09-4e3f-a3fc-dc46b4b44102","timestampMs":1708384506219,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup"} 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.245+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.421+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:41 policy-apex-pdp | {"source":"pap-a92a4a8b-7770-4bfc-a655-2697c581a9e3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"dbfed9da-433f-414e-99bd-a5afc818016c","timestampMs":1708384506334,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.431+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.431+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 23:16:41 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"87cf70eb-861c-4fe6-b963-0fa51b97d516","timestampMs":1708384506431,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup"} 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.436+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:41 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"dbfed9da-433f-414e-99bd-a5afc818016c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"0acc3e0c-854c-4c5d-90cc-11816db4d7f6","timestampMs":1708384506436,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.444+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:41 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"87cf70eb-861c-4fe6-b963-0fa51b97d516","timestampMs":1708384506431,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup"} 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.444+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:41 policy-db-migrator | > upgrade 0820-toscatrigger.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE 
TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE INDEX 
FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, 
capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.557883637Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.55851875Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=634.333µs 23:16:41 grafana | 
logger=migrator t=2024-02-19T23:14:16.562353978Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.562417718Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=64.37µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.566064814Z level=info msg="Executing migration" id="create team table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.566710237Z level=info msg="Migration successfully executed" id="create team table" duration=643.803µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.573144777Z level=info msg="Executing migration" id="add index team.org_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.574583993Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.438616ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.610013314Z level=info msg="Executing migration" id="add unique index team_org_id_name" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.61136271Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.349116ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.61557648Z level=info msg="Executing migration" id="Add column uid in team" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.622529771Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=6.952281ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.627323883Z level=info msg="Executing migration" id="Update uid column values in team" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.627469613Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=145.46µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.633012008Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.633636151Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=623.983µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.637296147Z level=info msg="Executing migration" id="create team member table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.638328103Z level=info msg="Migration successfully executed" id="create team member table" duration=1.031526ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.64431387Z level=info msg="Executing migration" id="add index team_member.org_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.645299394Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=985.584µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.651631523Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.652925189Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.293426ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.657248199Z level=info msg="Executing migration" id="add index team_member.team_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.658170273Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=921.854µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.664052549Z level=info msg="Executing migration" id="Add column 
email to team table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.668789161Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.738892ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.673654362Z level=info msg="Executing migration" id="Add column external to team_member table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.678371794Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.716962ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.682072851Z level=info msg="Executing migration" id="Add column permission to team_member table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.686842043Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.768622ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.693166501Z level=info msg="Executing migration" id="create dashboard acl table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.694062536Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=897.175µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.69929927Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.700815336Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.515816ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.704952456Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.70601376Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.061024ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.711200904Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.713656895Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=2.453961ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.71926827Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.720148154Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=879.804µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.728694543Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.730954423Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=2.25914ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.737432473Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.738555868Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.123646ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.744330854Z level=info msg="Executing migration" id="add index dashboard_permission" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.746202112Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.870948ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.750257351Z level=info 
msg="Executing migration" id="save default acl rules in dashboard_acl table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.750894934Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=641.933µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.758651219Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.759051251Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=399.912µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.766885547Z level=info msg="Executing migration" id="create tag table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.769254888Z level=info msg="Migration successfully executed" id="create tag table" duration=2.361021ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.773115075Z level=info msg="Executing migration" id="add index tag.key_value" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.77445383Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.339915ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.777765856Z level=info msg="Executing migration" id="create login attempt table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.778487569Z level=info msg="Migration successfully executed" id="create login attempt table" duration=718.543µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.784206705Z level=info msg="Executing migration" id="add index login_attempt.username" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.785107299Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=900.614µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.788505754Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.789549179Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.042885ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.800379179Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.817531447Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=17.148668ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.821773685Z level=info msg="Executing migration" id="create login_attempt v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.822281547Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=506.712µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.824644398Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.825640183Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=995.675µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.828731887Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.82939518Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=663.523µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.833956441Z 
level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.835356377Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.391236ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.842135369Z level=info msg="Executing migration" id="create user auth table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.842951792Z level=info msg="Migration successfully executed" id="create user auth table" duration=816.173µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.845824935Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.84690073Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.075535ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.85140103Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.851495041Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=94.631µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.855944321Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.861854288Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.909517ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.865162203Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.873558351Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=8.375918ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.87770995Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.883220425Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.509815ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.886867312Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.892279975Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.412433ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.896354454Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.897354149Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=999.665µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.906723942Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.912537638Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.813146ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.915658672Z level=info msg="Executing migration" id="create server_lock table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.916485796Z level=info msg="Migration successfully executed" id="create server_lock table" duration=826.944µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.920466025Z level=info msg="Executing 
migration" id="add index server_lock.operation_uid" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.921467099Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.000754ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.925756988Z level=info msg="Executing migration" id="create user auth token table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.926634462Z level=info msg="Migration successfully executed" id="create user auth token table" duration=878.374µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.930010257Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.931097112Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.086545ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.936565437Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.938003294Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.436247ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.944441873Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.94608784Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.645697ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.949599366Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.955187822Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.587676ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.95930007Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.960340345Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.040265ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.96573868Z level=info msg="Executing migration" id="create cache_data table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.967050745Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.304435ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.970979864Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.972591251Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.611297ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.977029381Z level=info msg="Executing migration" id="create short_url table v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.977819474Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=789.893µs 23:16:41 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES 
toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0100-pdp.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 23:16:41 policy-db-migrator | JOIN pdpstatistics b 23:16:41 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 23:16:41 policy-db-migrator | SET a.id = b.id 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 
policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0210-sequence.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0220-sequence.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.984140673Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.985732361Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.591498ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.989194406Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.989333877Z level=info msg="Migration successfully executed" id="alter table 
short_url alter column created_by type to bigint" duration=132.151µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.993075003Z level=info msg="Executing migration" id="delete alert_definition table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:16.993181645Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=107.201µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.030858664Z level=info msg="Executing migration" id="recreate alert_definition table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.032114669Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.255905ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.03868922Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.040308007Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.623367ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.043870533Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.04547857Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.607397ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.048878825Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.048969356Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=88.251µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.053605517Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.054588981Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=983.534µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.057914547Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.059408403Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.494406ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.062787498Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.063906544Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.119986ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.066977737Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.067975032Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=996.925µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.073220745Z level=info msg="Executing migration" id="Add column paused in alert_definition" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.078764921Z level=info msg="Migration successfully 
executed" id="Add column paused in alert_definition" duration=5.543836ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.082505177Z level=info msg="Executing migration" id="drop alert_definition table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.083438712Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=936.595µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.086630396Z level=info msg="Executing migration" id="delete alert_definition_version table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.086737037Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=107.451µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.089983771Z level=info msg="Executing migration" id="recreate alert_definition_version table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.091311767Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.330206ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.094976854Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.096603351Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.625856ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.099982017Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.101086491Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.109324ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.104547337Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.104631367Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=84.07µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.112314192Z level=info msg="Executing migration" id="drop alert_definition_version table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.113538327Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.225865ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.119082262Z level=info msg="Executing migration" id="create alert_instance table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.119992237Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=909.815µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.123636153Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.124611537Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=974.604µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.130111112Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 23:16:41 grafana | logger=migrator 
t=2024-02-19T23:14:17.131167457Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.055365ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.134858604Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.450+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:41 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"dbfed9da-433f-414e-99bd-a5afc818016c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"0acc3e0c-854c-4c5d-90cc-11816db4d7f6","timestampMs":1708384506436,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.451+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.484+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:41 policy-apex-pdp | {"source":"pap-a92a4a8b-7770-4bfc-a655-2697c581a9e3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"35df8189-a6da-46ad-bacc-3ab4a6fe7616","timestampMs":1708384506335,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.489+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:41 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"35df8189-a6da-46ad-bacc-3ab4a6fe7616","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"9a7c0a83-3afa-4f88-b0a9-20224f1c26a9","timestampMs":1708384506488,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.499+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:41 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"35df8189-a6da-46ad-bacc-3ab4a6fe7616","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"9a7c0a83-3afa-4f88-b0a9-20224f1c26a9","timestampMs":1708384506488,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.503+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.511+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:41 policy-apex-pdp | {"source":"pap-a92a4a8b-7770-4bfc-a655-2697c581a9e3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a1a672fa-2c48-48e1-814b-49846bec95c1","timestampMs":1708384506492,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.513+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:41 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a1a672fa-2c48-48e1-814b-49846bec95c1","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"08b36924-eee8-4089-b8a2-3790ae49474f","timestampMs":1708384506513,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.523+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:41 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a1a672fa-2c48-48e1-814b-49846bec95c1","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"08b36924-eee8-4089-b8a2-3790ae49474f","timestampMs":1708384506513,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-apex-pdp | [2024-02-19T23:15:06.524+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:41 policy-apex-pdp | [2024-02-19T23:15:56.076+00:00|INFO|RequestLog|qtp1068445309-29] 172.17.0.4 - policyadmin [19/Feb/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.49.1" 23:16:41 kafka | sasl.login.callback.handler.class = null 23:16:41 kafka | sasl.login.class = null 23:16:41 kafka | sasl.login.connect.timeout.ms = null 23:16:41 kafka | sasl.login.read.timeout.ms = null 23:16:41 kafka | sasl.login.refresh.buffer.seconds = 300 23:16:41 kafka | sasl.login.refresh.min.period.seconds = 60 23:16:41 kafka | sasl.login.refresh.window.factor = 0.8 23:16:41 kafka | sasl.login.refresh.window.jitter = 0.05 23:16:41 kafka | sasl.login.retry.backoff.max.ms = 10000 23:16:41 kafka | sasl.login.retry.backoff.ms = 100 23:16:41 kafka | sasl.mechanism.controller.protocol = GSSAPI 23:16:41 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 23:16:41 kafka | sasl.oauthbearer.clock.skew.seconds = 30 23:16:41 kafka | sasl.oauthbearer.expected.audience = null 23:16:41 kafka | sasl.oauthbearer.expected.issuer = null 23:16:41 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:41 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:41 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:41 kafka | 
sasl.oauthbearer.jwks.endpoint.url = null 23:16:41 kafka | sasl.oauthbearer.scope.claim.name = scope 23:16:41 kafka | sasl.oauthbearer.sub.claim.name = sub 23:16:41 kafka | sasl.oauthbearer.token.endpoint.url = null 23:16:41 kafka | sasl.server.callback.handler.class = null 23:16:41 kafka | sasl.server.max.receive.size = 524288 23:16:41 kafka | security.inter.broker.protocol = PLAINTEXT 23:16:41 kafka | security.providers = null 23:16:41 kafka | server.max.startup.time.ms = 9223372036854775807 23:16:41 kafka | socket.connection.setup.timeout.max.ms = 30000 23:16:41 kafka | socket.connection.setup.timeout.ms = 10000 23:16:41 kafka | socket.listen.backlog.size = 50 23:16:41 kafka | socket.receive.buffer.bytes = 102400 23:16:41 kafka | socket.request.max.bytes = 104857600 23:16:41 kafka | socket.send.buffer.bytes = 102400 23:16:41 kafka | ssl.cipher.suites = [] 23:16:41 kafka | ssl.client.auth = none 23:16:41 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:41 kafka | ssl.endpoint.identification.algorithm = https 23:16:41 kafka | ssl.engine.factory.class = null 23:16:41 kafka | ssl.key.password = null 23:16:41 kafka | ssl.keymanager.algorithm = SunX509 23:16:41 kafka | ssl.keystore.certificate.chain = null 23:16:41 kafka | ssl.keystore.key = null 23:16:41 kafka | ssl.keystore.location = null 23:16:41 kafka | ssl.keystore.password = null 23:16:41 kafka | ssl.keystore.type = JKS 23:16:41 kafka | ssl.principal.mapping.rules = DEFAULT 23:16:41 kafka | ssl.protocol = TLSv1.3 23:16:41 kafka | ssl.provider = null 23:16:41 kafka | ssl.secure.random.implementation = null 23:16:41 kafka | ssl.trustmanager.algorithm = PKIX 23:16:41 kafka | ssl.truststore.certificates = null 23:16:41 kafka | ssl.truststore.location = null 23:16:41 kafka | ssl.truststore.password = null 23:16:41 kafka | ssl.truststore.type = JKS 23:16:41 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 23:16:41 kafka | transaction.max.timeout.ms = 900000 23:16:41 kafka | transaction.partition.verification.enable = true 23:16:41 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 23:16:41 kafka | transaction.state.log.load.buffer.size = 5242880 23:16:41 kafka | transaction.state.log.min.isr = 2 23:16:41 kafka | transaction.state.log.num.partitions = 50 23:16:41 kafka | transaction.state.log.replication.factor = 3 23:16:41 kafka | transaction.state.log.segment.bytes = 104857600 23:16:41 kafka | transactional.id.expiration.ms = 604800000 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.141443433Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.599479ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.144718238Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 23:16:41 kafka | unclean.leader.election.enable = false 23:16:41 kafka | unstable.api.versions.enable = false 23:16:41 kafka | zookeeper.clientCnxnSocket = null 23:16:41 kafka | zookeeper.connect = zookeeper:2181 23:16:41 kafka | zookeeper.connection.timeout.ms = null 23:16:41 kafka | zookeeper.max.in.flight.requests = 10 23:16:41 kafka | zookeeper.metadata.migration.enable = false 23:16:41 kafka | zookeeper.session.timeout.ms = 18000 23:16:41 kafka | zookeeper.set.acl = false 23:16:41 kafka | zookeeper.ssl.cipher.suites = null 23:16:41 kafka | zookeeper.ssl.client.enable = false 23:16:41 kafka | zookeeper.ssl.crl.enable = false 23:16:41 kafka | zookeeper.ssl.enabled.protocols = null 
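The policy-apex-pdp entries earlier in this log show the heartbeat protocol on the policy-pdp-pap topic: the PDP publishes a PDP_STATUS message, reads its own echo back from the topic, and the MessageTypeDispatcher drops it ("discarding event of type PDP_STATUS") because PDP_STATUS is consumed by PAP, not by the PDP. The same exchange also shows PAP driving the PASSIVE-to-ACTIVE transition via PDP_STATE_CHANGE. Below is a minimal sketch of building such a heartbeat payload; the field names are copied from the logged JSON, while make_heartbeat() and the use of Python are illustrative assumptions (the real PDP is Java).

    # Sketch: build a PDP_STATUS heartbeat like the ones logged above.
    # Field names mirror the logged payloads; make_heartbeat() is a
    # hypothetical helper, not part of any ONAP library.
    import json
    import time
    import uuid

    def make_heartbeat(pdp_name: str, group: str = "defaultGroup") -> str:
        msg = {
            "pdpType": "apex",
            "state": "PASSIVE",           # PASSIVE until PAP sends PDP_STATE_CHANGE
            "healthy": "HEALTHY",
            "description": "Pdp Heartbeat",
            "messageName": "PDP_STATUS",  # dispatchers route on this field
            "requestId": str(uuid.uuid4()),
            "timestampMs": int(time.time() * 1000),
            "name": pdp_name,
            "pdpGroup": group,
        }
        return json.dumps(msg)

    print(make_heartbeat("apex-16fd82d3-7dce-4d8c-bf24-21da0b696893"))

The OUT/IN pairs in the log are this one message as published and as read back from Kafka, which is why the payloads match byte for byte.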
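The policy-db-migrator and grafana blocks above both follow the same pattern: numbered SQL scripts (0820-toscatrigger.sql, 0830-FK_ToscaNodeTemplate_capabilitiesName.sql, and so on) applied in ascending order, with grafana's migrator additionally logging a per-migration duration. A minimal sketch of that pattern, assuming a directory of NNNN-name.sql files; sqlite3 is used only to keep the example self-contained (the DDL above is MariaDB and will not all run on SQLite), and the real migrator additionally batches scripts per component and schema version, which this simplification omits.

    # Sketch of an ordered-SQL-script migrator with per-script timing,
    # in the spirit of the policy-db-migrator and grafana logs above.
    import pathlib
    import sqlite3
    import time

    def run_migrations(db_path: str, script_dir: str) -> None:
        conn = sqlite3.connect(db_path)
        # The numeric prefix (e.g. "0820") determines execution order.
        scripts = sorted(pathlib.Path(script_dir).glob("*.sql"),
                         key=lambda p: int(p.name.split("-", 1)[0]))
        for script in scripts:
            print(f"> upgrade {script.name}")
            start = time.perf_counter()
            conn.executescript(script.read_text())
            conn.commit()
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"applied {script.name} duration={elapsed_ms:.3f}ms")
        conn.close()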
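The 0140-pk_pdpstatistics.sql step above is worth a closer look: it backfills a surrogate ID with ROW_NUMBER() OVER (ORDER BY timeStamp) before ALTER TABLE promotes (ID, name, version) to the composite primary key, so existing rows get stable, unique IDs without a data export. A sketch of the same backfill, assuming SQLite 3.33+ for UPDATE ... FROM and window functions; joining on rowid is a simplification of the name/version/timeStamp join in the real script, and the PK promotion itself is left to the ALTER TABLE shown in the log.

    # Sketch of the 0140-pk_pdpstatistics.sql backfill pattern.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE pdpstatistics (name TEXT, version TEXT, timeStamp TEXT, id INTEGER);
        INSERT INTO pdpstatistics (name, version, timeStamp) VALUES
            ('pdp-a', '1.0', '2024-02-19 23:14:00'),
            ('pdp-a', '1.0', '2024-02-19 23:15:00'),
            ('pdp-b', '1.0', '2024-02-19 23:14:30');
    """)
    # Assign each row a surrogate ID in timeStamp order, as the
    # UPDATE ... JOIN (SELECT ... ROW_NUMBER() ...) in the log does.
    conn.execute("""
        UPDATE pdpstatistics
        SET id = t.row_num
        FROM (SELECT rowid AS rid,
                     ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num
              FROM pdpstatistics) AS t
        WHERE pdpstatistics.rowid = t.rid
    """)
    for row in conn.execute("SELECT id, name, timeStamp FROM pdpstatistics ORDER BY id"):
        print(row)  # ids 1..3 assigned in timeStamp order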
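The kafka block above is the broker dumping its effective configuration as "key = value" lines at startup. When debugging a CSIT run, it can be handy to pull that dump back out of the console log and spot-check it against the values the compose file is supposed to set. A small sketch, where the log file name and the expected-values dict are assumptions for illustration.

    # Sketch: extract the broker's "key = value" startup dump from a
    # CSIT console log and spot-check a few settings seen above.
    import re

    CONFIG_LINE = re.compile(r"kafka \| (?P<key>[\w.]+) = (?P<value>.*)$")

    def broker_config(log_path: str) -> dict:
        config = {}
        with open(log_path) as log:
            for line in log:
                match = CONFIG_LINE.search(line.rstrip())
                if match:
                    config[match.group("key")] = match.group("value")
        return config

    expected = {
        "security.inter.broker.protocol": "PLAINTEXT",
        "transaction.state.log.min.isr": "2",
        "zookeeper.connect": "zookeeper:2181",
    }
    cfg = broker_config("console.log")  # hypothetical saved console log
    for key, want in expected.items():
        got = cfg.get(key)
        status = "OK" if got == want else f"MISMATCH (got {got!r})"
        print(f"{key}: {status}")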
23:16:41 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 23:16:41 kafka | zookeeper.ssl.keystore.location = null 23:16:41 kafka | zookeeper.ssl.keystore.password = null 23:16:41 kafka | zookeeper.ssl.keystore.type = null 23:16:41 policy-db-migrator | > upgrade 0120-toscatrigger.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0140-toscaparameter.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0150-toscaproperty.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-pap | Waiting for mariadb port 3306... 23:16:41 policy-pap | mariadb (172.17.0.2:3306) open 23:16:41 policy-pap | Waiting for kafka port 9092... 23:16:41 policy-pap | kafka (172.17.0.8:9092) open 23:16:41 policy-pap | Waiting for api port 6969... 23:16:41 policy-pap | api (172.17.0.7:6969) open 23:16:41 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 23:16:41 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 23:16:41 policy-pap | 23:16:41 policy-pap | . ____ _ __ _ _ 23:16:41 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:41 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:41 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:41 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:41 policy-pap | =========|_|==============|___/=/_/_/_/ 23:16:41 policy-pap | :: Spring Boot :: (v3.1.8) 23:16:41 policy-pap | 23:16:41 policy-pap | [2024-02-19T23:14:34.863+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 33 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 23:16:41 policy-pap | [2024-02-19T23:14:34.865+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:41 policy-pap | [2024-02-19T23:14:36.666+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:41 policy-pap | [2024-02-19T23:14:36.784+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 107 ms. Found 7 JPA repository interfaces. 23:16:41 policy-pap | [2024-02-19T23:14:37.170+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:41 policy-pap | [2024-02-19T23:14:37.171+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:41 policy-pap | [2024-02-19T23:14:37.839+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:41 policy-pap | [2024-02-19T23:14:37.850+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:41 policy-pap | [2024-02-19T23:14:37.852+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:41 policy-pap | [2024-02-19T23:14:37.852+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:16:41 policy-pap | [2024-02-19T23:14:37.945+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:41 policy-pap | [2024-02-19T23:14:37.946+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3000 ms 23:16:41 policy-pap | [2024-02-19T23:14:38.372+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:41 policy-pap | [2024-02-19T23:14:38.465+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:16:41 policy-pap | [2024-02-19T23:14:38.468+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:16:41 policy-pap | [2024-02-19T23:14:38.525+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:41 policy-pap | [2024-02-19T23:14:38.902+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:41 policy-pap | [2024-02-19T23:14:38.925+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:41 policy-pap | [2024-02-19T23:14:39.055+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@36a6bea6 23:16:41 policy-pap | [2024-02-19T23:14:39.058+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:16:41 policy-pap | [2024-02-19T23:14:39.089+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) 23:16:41 policy-pap | [2024-02-19T23:14:39.091+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead 23:16:41 policy-pap | [2024-02-19T23:14:41.136+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:41 policy-pap | [2024-02-19T23:14:41.140+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:41 policy-pap | [2024-02-19T23:14:41.687+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.145584252Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=861.724µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.153827549Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.155024064Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.233345ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.159794876Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.195225447Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=35.430851ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.19812061Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.229709292Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=31.587362ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.235924079Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.236633372Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=709.273µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.243073352Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.244705479Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.631347ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.249040549Z level=info msg="Executing migration" id="add current_reason column related to current_state" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.255338807Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.292898ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.261649066Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.265594054Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=3.944648ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.268729248Z level=info msg="Executing migration" id="create alert_rule table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.269549151Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=819.113µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.274695125Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.275699299Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.004104ms 23:16:41 grafana | logger=migrator 
t=2024-02-19T23:14:17.281922408Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.283154373Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.235125ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.289448051Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.291640591Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=2.19051ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.295474688Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.295580788Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=105.99µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.300133159Z level=info msg="Executing migration" id="add column for to alert_rule" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.306306747Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.174178ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.311011779Z level=info msg="Executing migration" id="add column annotations to alert_rule" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.316984766Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.972167ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.323013512Z level=info msg="Executing migration" id="add column labels to alert_rule" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.331469111Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=8.457139ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.337627728Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.338374762Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=747.024µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.342040729Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.343335174Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.288035ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.347363303Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.354725376Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=7.373463ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.360947934Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.367189242Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.240458ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.372860638Z level=info msg="Executing migration" id="add index in 
alert_rule on org_id, dashboard_uid and panel_id columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.375174398Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=2.31881ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.381550497Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.387693064Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.142347ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.391346481Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.397341409Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.994427ms 23:16:41 policy-pap | [2024-02-19T23:14:42.174+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 23:16:41 policy-pap | [2024-02-19T23:14:42.286+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 23:16:41 policy-pap | [2024-02-19T23:14:42.562+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:41 policy-pap | allow.auto.create.topics = true 23:16:41 policy-pap | auto.commit.interval.ms = 5000 23:16:41 policy-pap | auto.include.jmx.reporter = true 23:16:41 policy-pap | auto.offset.reset = latest 23:16:41 policy-pap | bootstrap.servers = [kafka:9092] 23:16:41 policy-pap | check.crcs = true 23:16:41 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:41 policy-pap | client.id = consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-1 23:16:41 policy-pap | client.rack = 23:16:41 policy-pap | connections.max.idle.ms = 540000 23:16:41 policy-pap | default.api.timeout.ms = 60000 23:16:41 policy-pap | enable.auto.commit = true 23:16:41 policy-pap | exclude.internal.topics = true 23:16:41 policy-pap | fetch.max.bytes = 52428800 23:16:41 policy-pap | fetch.max.wait.ms = 500 23:16:41 policy-pap | fetch.min.bytes = 1 23:16:41 policy-pap | group.id = d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73 23:16:41 policy-pap | group.instance.id = null 23:16:41 policy-pap | heartbeat.interval.ms = 3000 23:16:41 policy-pap | interceptor.classes = [] 23:16:41 policy-pap | internal.leave.group.on.close = true 23:16:41 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:41 policy-pap | isolation.level = read_uncommitted 23:16:41 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:41 policy-pap | max.partition.fetch.bytes = 1048576 23:16:41 policy-pap | max.poll.interval.ms = 300000 23:16:41 policy-pap | max.poll.records = 500 23:16:41 policy-pap | metadata.max.age.ms = 300000 23:16:41 policy-pap | metric.reporters = [] 23:16:41 policy-pap | metrics.num.samples = 2 23:16:41 policy-pap | metrics.recording.level = INFO 23:16:41 policy-pap | metrics.sample.window.ms = 30000 23:16:41 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class 
org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:41 policy-pap | receive.buffer.bytes = 65536 23:16:41 policy-pap | reconnect.backoff.max.ms = 1000 23:16:41 policy-pap | reconnect.backoff.ms = 50 23:16:41 policy-pap | request.timeout.ms = 30000 23:16:41 policy-pap | retry.backoff.ms = 100 23:16:41 policy-pap | sasl.client.callback.handler.class = null 23:16:41 policy-pap | sasl.jaas.config = null 23:16:41 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:41 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:41 policy-pap | sasl.kerberos.service.name = null 23:16:41 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:41 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:41 policy-pap | sasl.login.callback.handler.class = null 23:16:41 policy-pap | sasl.login.class = null 23:16:41 policy-pap | sasl.login.connect.timeout.ms = null 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0100-upgrade.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | select 'upgrade to 1100 completed' as msg 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | msg 23:16:41 policy-db-migrator | upgrade to 1100 completed 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, 
version) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0120-audit_sequence.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | TRUNCATE TABLE sequence 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | DROP TABLE pdpstatistics 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | DROP TABLE statistics_sequence 23:16:41 policy-db-migrator | -------------- 23:16:41 policy-db-migrator | 23:16:41 policy-db-migrator | policyadmin: OK: upgrade (1300) 23:16:41 policy-db-migrator | name version 23:16:41 policy-db-migrator | policyadmin 1300 23:16:41 policy-db-migrator | ID script operation from_version to_version tag success atTime 23:16:41 prometheus | ts=2024-02-19T23:14:13.983Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d 23:16:41 prometheus | ts=2024-02-19T23:14:13.983Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)" 23:16:41 prometheus | ts=2024-02-19T23:14:13.983Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)" 23:16:41 prometheus | ts=2024-02-19T23:14:13.983Z caller=main.go:594 level=info 
host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 23:16:41 prometheus | ts=2024-02-19T23:14:13.983Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)" 23:16:41 prometheus | ts=2024-02-19T23:14:13.983Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)" 23:16:41 prometheus | ts=2024-02-19T23:14:13.985Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 23:16:41 prometheus | ts=2024-02-19T23:14:13.985Z caller=main.go:1039 level=info msg="Starting TSDB ..." 23:16:41 prometheus | ts=2024-02-19T23:14:13.991Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090 23:16:41 prometheus | ts=2024-02-19T23:14:13.991Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 23:16:41 prometheus | ts=2024-02-19T23:14:13.994Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 23:16:41 prometheus | ts=2024-02-19T23:14:13.994Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.51µs 23:16:41 prometheus | ts=2024-02-19T23:14:13.994Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while" 23:16:41 prometheus | ts=2024-02-19T23:14:13.995Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 23:16:41 prometheus | ts=2024-02-19T23:14:13.995Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=61.361µs wal_replay_duration=535.514µs wbl_replay_duration=380ns total_replay_duration=750.206µs 23:16:41 prometheus | ts=2024-02-19T23:14:13.998Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC 23:16:41 prometheus | ts=2024-02-19T23:14:13.998Z caller=main.go:1063 level=info msg="TSDB started" 23:16:41 prometheus | ts=2024-02-19T23:14:13.998Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 23:16:41 prometheus | ts=2024-02-19T23:14:13.998Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=836.328µs db_storage=1.77µs remote_storage=2.29µs web_handler=870ns query_engine=1.241µs scrape=194.481µs scrape_sd=90.631µs notify=25.33µs notify_sd=10.4µs rules=1.91µs tracing=5.16µs 23:16:41 prometheus | ts=2024-02-19T23:14:13.998Z caller=main.go:1024 level=info msg="Server is ready to receive web requests." 23:16:41 prometheus | ts=2024-02-19T23:14:13.999Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..." 
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.401475577Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.401587677Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=112.04µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.409542483Z level=info msg="Executing migration" id="create alert_rule_version table" 23:16:41 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:41 simulator | overriding logback.xml 23:16:41 simulator | 2024-02-19 23:14:06,310 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:41 simulator | 2024-02-19 23:14:06,368 INFO org.onap.policy.models.simulators starting 23:16:41 simulator | 2024-02-19 23:14:06,368 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 23:16:41 simulator | 2024-02-19 23:14:06,545 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 23:16:41 simulator | 2024-02-19 23:14:06,546 INFO org.onap.policy.models.simulators starting A&AI simulator 23:16:41 simulator | 2024-02-19 23:14:06,645 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:41 simulator | 2024-02-19 23:14:06,657 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:41 simulator | 2024-02-19 23:14:06,659 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI 
simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:41 simulator | 2024-02-19 23:14:06,664 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:41 simulator | 2024-02-19 23:14:06,721 INFO Session workerName=node0 23:16:41 simulator | 2024-02-19 23:14:07,229 INFO Using GSON for REST calls 23:16:41 simulator | 2024-02-19 23:14:07,296 INFO Started o.e.j.s.ServletContextHandler@b6b1987{/,null,AVAILABLE} 23:16:41 simulator | 2024-02-19 23:14:07,309 INFO Started A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 23:16:41 simulator | 2024-02-19 23:14:07,317 INFO Started Server@2a2c13a8{STARTING}[11.0.20,sto=0] @1497ms 23:16:41 simulator | 2024-02-19 23:14:07,318 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,AVAILABLE}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4342 ms. 23:16:41 simulator | 2024-02-19 23:14:07,322 INFO org.onap.policy.models.simulators starting SDNC simulator 23:16:41 simulator | 2024-02-19 23:14:07,326 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:41 simulator | 2024-02-19 23:14:07,326 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:41 simulator | 
2024-02-19 23:14:07,327 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:41 simulator | 2024-02-19 23:14:07,328 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:41 simulator | 2024-02-19 23:14:07,335 INFO Session workerName=node0 23:16:41 simulator | 2024-02-19 23:14:07,396 INFO Using GSON for REST calls 23:16:41 simulator | 2024-02-19 23:14:07,406 INFO Started o.e.j.s.ServletContextHandler@6941827a{/,null,AVAILABLE} 23:16:41 simulator | 2024-02-19 23:14:07,407 INFO Started SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 23:16:41 simulator | 2024-02-19 23:14:07,407 INFO Started Server@62452cc9{STARTING}[11.0.20,sto=0] @1587ms 23:16:41 simulator | 2024-02-19 23:14:07,407 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,AVAILABLE}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4920 ms. 
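Each JettyJerseyServer entry above shows the same embedded pattern: a Jetty Server bound to the simulator's port, one ServletContextHandler at contextPath=/, and a Jersey ServletContainer mapped to /*. A minimal sketch of that wiring, assuming Jetty 11 and Jersey 3 as the "jetty-11.0.20" version line suggests; the resource package name is hypothetical, since the log shows the servlet class but not the registered resources:

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;
    import org.glassfish.jersey.servlet.ServletContainer;

    public class SimulatorServerSketch {
        public static void main(String[] args) throws Exception {
            Server server = new Server(6668);  // SDNC simulator port from the log
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");       // contextPath=/ as in the JettyServletServer toString()
            ServletHolder jersey = new ServletHolder(new ServletContainer());
            // Hypothetical package of JAX-RS resources; not taken from the log.
            jersey.setInitParameter("jersey.config.server.provider.packages", "com.example.sim.rest");
            context.addServlet(jersey, "/*");  // servlets={/*=...ServletContainer...} as logged
            server.setHandler(context);
            server.start();                    // yields the "Started o.e.j.s.ServletContextHandler" / "Started Server@..." lines
            server.join();
        }
    }

The WAITED-START → STARTING → STARTED progression in the log is the simulator's own state reporting around exactly this start() call.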
23:16:41 simulator | 2024-02-19 23:14:07,408 INFO org.onap.policy.models.simulators starting SO simulator 23:16:41 simulator | 2024-02-19 23:14:07,410 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:41 simulator | 2024-02-19 23:14:07,411 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:41 simulator | 2024-02-19 23:14:07,412 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:41 simulator | 2024-02-19 23:14:07,413 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:41 simulator | 2024-02-19 23:14:07,416 INFO Session workerName=node0 23:16:41 simulator | 2024-02-19 23:14:07,467 INFO Using GSON for REST calls 23:16:41 simulator | 2024-02-19 23:14:07,479 INFO Started o.e.j.s.ServletContextHandler@5e81e5ac{/,null,AVAILABLE} 23:16:41 simulator | 2024-02-19 23:14:07,480 INFO Started SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.410730709Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.187906ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.448328768Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 23:16:41 grafana | 
logger=migrator t=2024-02-19T23:14:17.450092307Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.762899ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.454313365Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.455485081Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.171476ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.460194382Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.460320982Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=126.43µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.464503542Z level=info msg="Executing migration" id="add column for to alert_rule_version" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.475626032Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=11.11149ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.480015301Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.486718322Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.700571ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.492044906Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.498568015Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.522669ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.50189166Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.5085289Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.640899ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.51289904Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.519217648Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.318708ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.522679305Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.522770705Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=92.22µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.527261675Z level=info msg="Executing migration" id=create_alert_configuration_table 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.527788417Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=524.292µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.531875295Z level=info msg="Executing migration" id="Add column default in alert_configuration" 23:16:41 
grafana | logger=migrator t=2024-02-19T23:14:17.541259668Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=9.381703ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.547217774Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.547284645Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=67.131µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.550483879Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.561435409Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=10.95313ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.564839814Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.565599498Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=762.534µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.568807762Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.574508998Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=5.700646ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.57932971Z level=info msg="Executing migration" id=create_ngalert_configuration_table 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.580498325Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.168265ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.586926134Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.58825899Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.336566ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.592920181Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.599884012Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.962991ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.605712669Z level=info msg="Executing migration" id="create provenance_type table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.606240411Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=529.842µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.61036815Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.611947437Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.586017ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.617599102Z level=info msg="Executing migration" id="create alert_image table" 23:16:41 grafana | logger=migrator 
t=2024-02-19T23:14:17.618285196Z level=info msg="Migration successfully executed" id="create alert_image table" duration=684.004µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.623473778Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.624289283Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=815.605µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.631200433Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.631380004Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=185.401µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.638501077Z level=info msg="Executing migration" id=create_alert_configuration_history_table 23:16:41 simulator | 2024-02-19 23:14:07,480 INFO Started Server@488eb7f2{STARTING}[11.0.20,sto=0] @1660ms 23:16:41 simulator | 2024-02-19 23:14:07,480 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,AVAILABLE}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4932 ms. 
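The grafana migrator lines throughout this output all follow one fixed shape: an "Executing migration" entry, the schema change, then "Migration successfully executed" with a measured duration. Index and column additions complete in microseconds to a few milliseconds, while the column renames ("rename data_keys name column to id" at 46.40625ms, "rename def_org_id to rule_org_id" at 35.430851ms) take tens of milliseconds, which suggests those are implemented as a table copy on this backend. A rough Java rendering of the announce/run/measure pattern, with illustrative names only:

    import java.time.Duration;
    import java.time.Instant;

    public class MigrationTimingSketch {
        static void run(String id, Runnable step) {
            System.out.printf("level=info msg=\"Executing migration\" id=\"%s\"%n", id);
            Instant start = Instant.now();
            step.run(); // the DDL statement would execute here
            long micros = Duration.between(start, Instant.now()).toNanos() / 1_000;
            System.out.printf("level=info msg=\"Migration successfully executed\" id=\"%s\" duration=%dµs%n",
                    id, micros);
        }

        public static void main(String[] args) {
            run("create example table", () -> { /* no-op stand-in for DDL */ });
        }
    }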
23:16:41 simulator | 2024-02-19 23:14:07,481 INFO org.onap.policy.models.simulators starting VFC simulator 23:16:41 simulator | 2024-02-19 23:14:07,486 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:41 simulator | 2024-02-19 23:14:07,486 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:41 simulator | 2024-02-19 23:14:07,487 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:41 simulator | 2024-02-19 23:14:07,488 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:41 simulator | 2024-02-19 23:14:07,490 INFO Session workerName=node0 23:16:41 simulator | 2024-02-19 23:14:07,530 INFO Using GSON for REST calls 23:16:41 simulator | 2024-02-19 23:14:07,539 INFO Started o.e.j.s.ServletContextHandler@320de594{/,null,AVAILABLE} 23:16:41 simulator | 2024-02-19 23:14:07,540 INFO Started VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 23:16:41 simulator | 2024-02-19 23:14:07,540 INFO Started Server@6035b93b{STARTING}[11.0.20,sto=0] @1720ms 23:16:41 simulator | 2024-02-19 23:14:07,540 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, 
swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,AVAILABLE}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4947 ms. 23:16:41 simulator | 2024-02-19 23:14:07,541 INFO org.onap.policy.models.simulators started 23:16:41 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:14 23:16:41 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 23 
0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.639374321Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=875.954µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.64584543Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.647219336Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.377507ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.650785442Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.651248654Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.65473062Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.655217232Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=486.232µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.660466816Z level=info msg="Executing 
migration" id="add unique index on orgID to alert_configuration" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.661727681Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.260665ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.664956196Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.671604377Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.64431ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.674812991Z level=info msg="Executing migration" id="create library_element table v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.675746895Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=933.754µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.681699841Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.68366968Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.969489ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.687842729Z level=info msg="Executing migration" id="create library_element_connection table v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.689077605Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.234846ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.6946727Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.695722025Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.048955ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.69910822Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.700776258Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.667728ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.704304113Z level=info msg="Executing migration" id="increase max description length to 2048" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.704340554Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=38.131µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.708733164Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.708825974Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=92.28µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.713972217Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.714340329Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=371.962µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.717018891Z level=info msg="Executing migration" id="create data_keys table" 23:16:41 grafana | logger=migrator 
t=2024-02-19T23:14:17.718410437Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.390786ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.722188985Z level=info msg="Executing migration" id="create secrets table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.72339178Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.205175ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.729486887Z level=info msg="Executing migration" id="rename data_keys name column to id" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.775893057Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=46.40625ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.77883881Z level=info msg="Executing migration" id="add name column into data_keys" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.785945162Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.106472ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.789465188Z level=info msg="Executing migration" id="copy data_keys id column values into name" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.789678779Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=213.261µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.792922324Z level=info msg="Executing migration" id="rename data_keys name column to label" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.845365801Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=52.442417ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.860317918Z level=info msg="Executing migration" id="rename data_keys id column back to name" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.916572362Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=56.241344ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.921998616Z level=info msg="Executing migration" id="create kv_store table v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.922646759Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=647.523µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.925848583Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.926635897Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=786.304µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.930181073Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.930436614Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=255.191µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.933656579Z level=info msg="Executing migration" id="create permission table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.934498573Z level=info msg="Migration successfully executed" id="create permission table" duration=841.774µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.941102893Z level=info msg="Executing migration" id="add unique index permission.role_id" 23:16:41 policy-db-migrator | 41 
0500-pdpsubgroup.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15
23:16:41 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15
23:16:41 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15
23:16:41 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15
23:16:41 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:16
23:16:41 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:17
23:16:41 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1902242314140900u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1902242314140900u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1902242314140900u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1902242314140900u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 1902242314140900u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 1902242314140900u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1902242314140900u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1902242314140900u 1 2024-02-19 23:14:18
23:16:41 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1902242314140900u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1902242314140900u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 1902242314140900u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 1902242314140900u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1902242314140900u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1902242314141000u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1902242314141000u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 1902242314141000u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1902242314141000u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1902242314141000u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 1902242314141000u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 1902242314141000u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1902242314141000u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1902242314141000u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1902242314141100u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1902242314141200u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1902242314141200u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1902242314141200u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 1902242314141200u 1 2024-02-19 23:14:19
23:16:41 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1902242314141300u 1 2024-02-19 23:14:20
23:16:41 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1902242314141300u 1 2024-02-19 23:14:20
23:16:41 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1902242314141300u 1 2024-02-19 23:14:20
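Each policy-db-migrator row above has a fixed layout: id, script, operation, source schema version, target schema version, tag, success flag (1/0), and execution timestamp. A minimal parsing sketch for one such audit row; the Row record and parse helper are hypothetical, since the migrator's own storage schema is not shown in this log:

    public class MigratorRow {
        // Hypothetical holder; field order mirrors the row layout logged above.
        record Row(int id, String script, String operation,
                   String fromVersion, String toVersion, String tag,
                   boolean success, String executedAt) {}

        static Row parse(String line) {
            // e.g. "42 0510-toscacapabilityassignment.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15"
            String[] f = line.trim().split("\\s+");
            return new Row(Integer.parseInt(f[0]), f[1], f[2], f[3], f[4], f[5],
                    "1".equals(f[6]), f[7] + " " + f[8]);
        }

        public static void main(String[] args) {
            System.out.println(parse(
                "42 0510-toscacapabilityassignment.sql upgrade 0 0800 1902242314140800u 1 2024-02-19 23:14:15"));
        }
    }

The success flag is 1 on every row here, which is what the "policyadmin: OK @ 1300" status line that follows summarizes.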
23:16:41 policy-db-migrator | policyadmin: OK @ 1300
23:16:41 kafka | zookeeper.ssl.ocsp.enable = false
23:16:41 kafka | zookeeper.ssl.protocol = TLSv1.2
23:16:41 kafka | zookeeper.ssl.truststore.location = null
23:16:41 kafka | zookeeper.ssl.truststore.password = null
23:16:41 kafka | zookeeper.ssl.truststore.type = null
23:16:41 kafka | (kafka.server.KafkaConfig)
23:16:41 kafka | [2024-02-19 23:14:16,739] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:41 kafka | [2024-02-19 23:14:16,741] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:41 kafka | [2024-02-19 23:14:16,744] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:41 kafka | [2024-02-19 23:14:16,744] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:41 kafka | [2024-02-19 23:14:16,773] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
23:16:41 kafka | [2024-02-19 23:14:16,779] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
23:16:41 kafka | [2024-02-19 23:14:16,789] INFO Loaded 0 logs in 16ms (kafka.log.LogManager)
23:16:41 kafka | [2024-02-19 23:14:16,791] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
23:16:41 kafka | [2024-02-19 23:14:16,792] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
23:16:41 kafka | [2024-02-19 23:14:16,804] INFO Starting the log cleaner (kafka.log.LogCleaner)
23:16:41 kafka | [2024-02-19 23:14:16,851] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
23:16:41 kafka | [2024-02-19 23:14:16,889] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
23:16:41 kafka | [2024-02-19 23:14:16,903] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
23:16:41 kafka | [2024-02-19 23:14:16,930] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
23:16:41 kafka | [2024-02-19 23:14:17,258] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
23:16:41 kafka | [2024-02-19 23:14:17,277] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
23:16:41 kafka | [2024-02-19 23:14:17,278] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
23:16:41 kafka | [2024-02-19 23:14:17,283] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
23:16:41 kafka | [2024-02-19 23:14:17,288] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
23:16:41 kafka | [2024-02-19 23:14:17,312] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:41 kafka | [2024-02-19 23:14:17,315] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:41 kafka | [2024-02-19 23:14:17,317] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:41 kafka | [2024-02-19 23:14:17,318] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:41 kafka | [2024-02-19 23:14:17,320] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:41 kafka | [2024-02-19 23:14:17,331] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
23:16:41 kafka | [2024-02-19 23:14:17,333] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
23:16:41 kafka | [2024-02-19 23:14:17,360] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
23:16:41 kafka | [2024-02-19 23:14:17,384] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1708384457375,1708384457375,1,0,0,72057609629990913,258,0,27
23:16:41 kafka | (kafka.zk.KafkaZkClient)
23:16:41 kafka | [2024-02-19 23:14:17,386] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
23:16:41 kafka | [2024-02-19 23:14:17,452] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
23:16:41 kafka | [2024-02-19 23:14:17,459] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:41 kafka | [2024-02-19 23:14:17,467] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:41 kafka | [2024-02-19 23:14:17,468] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:41 kafka | [2024-02-19 23:14:17,471] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
23:16:41 kafka | [2024-02-19 23:14:17,483] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,486] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,489] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
23:16:41 kafka | [2024-02-19 23:14:17,502] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
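The znode registration above shows the broker advertising PLAINTEXT://kafka:9092 and PLAINTEXT_HOST://localhost:29092. As a sanity check from outside the suite, a minimal Java sketch (hypothetical helper, not part of the CSIT scripts; assumes the compose hostname kafka:9092 is resolvable) that asks the cluster which brokers are registered:

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    public class BrokerCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // kafka:9092 is the PLAINTEXT listener registered under /brokers/ids/1 above.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // describeCluster() returns the live broker set; a single node with id 1 is expected here.
                admin.describeCluster().nodes().get()
                     .forEach(n -> System.out.printf("broker %d at %s:%d%n", n.id(), n.host(), n.port()));
            }
        }
    }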
23:16:41 policy-pap | sasl.login.read.timeout.ms = null
23:16:41 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:41 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:41 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:41 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:41 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:41 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:41 policy-pap | sasl.mechanism = GSSAPI
23:16:41 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:41 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:41 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:41 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:41 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:41 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:41 policy-pap | security.protocol = PLAINTEXT
23:16:41 policy-pap | security.providers = null
23:16:41 policy-pap | send.buffer.bytes = 131072
23:16:41 policy-pap | session.timeout.ms = 45000
23:16:41 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:41 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:41 policy-pap | ssl.cipher.suites = null
23:16:41 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:41 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:41 policy-pap | ssl.engine.factory.class = null
23:16:41 policy-pap | ssl.key.password = null
23:16:41 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:41 policy-pap | ssl.keystore.certificate.chain = null
23:16:41 policy-pap | ssl.keystore.key = null
23:16:41 policy-pap | ssl.keystore.location = null
23:16:41 policy-pap | ssl.keystore.password = null
23:16:41 policy-pap | ssl.keystore.type = JKS
23:16:41 policy-pap | ssl.protocol = TLSv1.3
23:16:41 policy-pap | ssl.provider = null
23:16:41 policy-pap | ssl.secure.random.implementation = null
23:16:41 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:41 policy-pap | ssl.truststore.certificates = null
23:16:41 policy-pap | ssl.truststore.location = null
23:16:41 policy-pap | ssl.truststore.password = null
23:16:41 policy-pap | ssl.truststore.type = JKS
23:16:41 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:41 policy-pap | 
23:16:41 policy-pap | [2024-02-19T23:14:42.744+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.94276982Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.662937ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.946876379Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.947955763Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.078914ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.952967246Z level=info msg="Executing migration" id="create role table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.95396314Z level=info msg="Migration successfully executed" id="create role table" duration=993.974µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.961475734Z level=info msg="Executing migration" id="add column display_name"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.972051002Z level=info msg="Migration successfully executed" id="add column display_name" duration=10.585268ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.978797732Z level=info msg="Executing migration" id="add column group_name"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.989971253Z level=info msg="Migration successfully executed" id="add column group_name" duration=11.174441ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.994249962Z level=info msg="Executing migration" id="add index role.org_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:17.994984996Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=734.444µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.000747912Z level=info msg="Executing migration" id="add unique index role_org_id_name"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.00244477Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.697078ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.006479967Z level=info msg="Executing migration" id="add index role_org_id_uid"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.007600053Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.120586ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.011329619Z level=info msg="Executing migration" id="create team role table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.012087422Z level=info msg="Migration successfully executed" id="create team role table" duration=757.763µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.019449746Z level=info msg="Executing migration" id="add index team_role.org_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.02056296Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.112864ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.025451433Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.027446352Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.989799ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.033550939Z level=info msg="Executing migration" id="add index team_role.team_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.034849164Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.298125ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.039267584Z level=info msg="Executing migration" id="create user role table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.040399149Z level=info msg="Migration successfully executed" id="create user role table" duration=1.131205ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.04698172Z level=info msg="Executing migration" id="add index user_role.org_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.048069194Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.087164ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.056193531Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.057893559Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.698858ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.062112857Z level=info msg="Executing migration" id="add index user_role.user_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.063724135Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.602817ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.070347134Z level=info msg="Executing migration" id="create builtin role table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.071171288Z level=info msg="Migration successfully executed" id="create builtin role table" duration=824.274µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.075689688Z level=info msg="Executing migration" id="add index builtin_role.role_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.076801133Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.115705ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.081892446Z level=info msg="Executing migration" id="add index builtin_role.name"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.083028571Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.136165ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.086589737Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.094212721Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.622994ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.099732666Z level=info msg="Executing migration" id="add index builtin_role.org_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.100834831Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.104805ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.105630613Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.10748795Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.856157ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.111100377Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
23:16:41 policy-pap | [2024-02-19T23:14:42.744+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:41 policy-pap | [2024-02-19T23:14:42.744+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708384482742
23:16:41 policy-pap | [2024-02-19T23:14:42.747+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-1, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Subscribed to topic(s): policy-pdp-pap
23:16:41 policy-pap | [2024-02-19T23:14:42.749+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:41 policy-pap | allow.auto.create.topics = true
23:16:41 policy-pap | auto.commit.interval.ms = 5000
23:16:41 policy-pap | auto.include.jmx.reporter = true
23:16:41 policy-pap | auto.offset.reset = latest
23:16:41 policy-pap | bootstrap.servers = [kafka:9092]
23:16:41 policy-pap | check.crcs = true
23:16:41 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:41 policy-pap | client.id = consumer-policy-pap-2
23:16:41 policy-pap | client.rack = 
23:16:41 policy-pap | connections.max.idle.ms = 540000
23:16:41 policy-pap | default.api.timeout.ms = 60000
23:16:41 policy-pap | enable.auto.commit = true
23:16:41 policy-pap | exclude.internal.topics = true
23:16:41 policy-pap | fetch.max.bytes = 52428800
23:16:41 policy-pap | fetch.max.wait.ms = 500
23:16:41 policy-pap | fetch.min.bytes = 1
23:16:41 policy-pap | group.id = policy-pap
23:16:41 policy-pap | group.instance.id = null
23:16:41 policy-pap | heartbeat.interval.ms = 3000
23:16:41 policy-pap | interceptor.classes = []
23:16:41 policy-pap | internal.leave.group.on.close = true
23:16:41 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:41 policy-pap | isolation.level = read_uncommitted
23:16:41 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:41 policy-pap | max.partition.fetch.bytes = 1048576
23:16:41 policy-pap | max.poll.interval.ms = 300000
23:16:41 policy-pap | max.poll.records = 500
23:16:41 policy-pap | metadata.max.age.ms = 300000
23:16:41 policy-pap | metric.reporters = []
23:16:41 policy-pap | metrics.num.samples = 2
23:16:41 policy-pap | metrics.recording.level = INFO
23:16:41 policy-pap | metrics.sample.window.ms = 30000
23:16:41 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:41 policy-pap | receive.buffer.bytes = 65536
23:16:41 policy-pap | reconnect.backoff.max.ms = 1000
23:16:41 policy-pap | reconnect.backoff.ms = 50
23:16:41 policy-pap | request.timeout.ms = 30000
23:16:41 policy-pap | retry.backoff.ms = 100
23:16:41 policy-pap | sasl.client.callback.handler.class = null
23:16:41 policy-pap | sasl.jaas.config = null
23:16:41 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:41 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:41 policy-pap | sasl.kerberos.service.name = null
23:16:41 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:41 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:41 policy-pap | sasl.login.callback.handler.class = null
23:16:41 policy-pap | sasl.login.class = null
23:16:41 policy-pap | sasl.login.connect.timeout.ms = null
23:16:41 policy-pap | sasl.login.read.timeout.ms = null
23:16:41 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:41 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:41 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:41 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:41 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:41 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:41 policy-pap | sasl.mechanism = GSSAPI
23:16:41 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:41 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:41 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:41 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:41 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:41 policy-pap | sasl.oauthbearer.token.endpoint.url = null
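The ConsumerConfig dump above (its ssl.* tail appears further down in the interleaved log) amounts to a plain String-deserializing consumer in group policy-pap subscribed to policy-pdp-pap. A minimal sketch reconstructing just the non-default values visible in the dump; this is illustrative only, not the actual PAP wiring:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PapStyleConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values taken from the ConsumerConfig dump above.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.CLIENT_ID_CONFIG, "consumer-policy-pap-2");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Topic taken from the "Subscribed to topic(s): policy-pdp-pap" line.
                consumer.subscribe(List.of("policy-pdp-pap"));
                consumer.poll(Duration.ofSeconds(1)).forEach(r -> System.out.println(r.value()));
            }
        }
    }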
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.112825994Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.727367ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.119172452Z level=info msg="Executing migration" id="add unique index role.uid"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.120285448Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.112546ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.127830102Z level=info msg="Executing migration" id="create seed assignment table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.129255008Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.424366ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.134570522Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.136390081Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.820049ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.140644759Z level=info msg="Executing migration" id="add column hidden to role table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.148639035Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.994366ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.152847153Z level=info msg="Executing migration" id="permission kind migration"
23:16:41 kafka | [2024-02-19 23:14:17,509] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
23:16:41 kafka | [2024-02-19 23:14:17,524] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
23:16:41 kafka | [2024-02-19 23:14:17,524] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,528] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,529] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
23:16:41 kafka | [2024-02-19 23:14:17,531] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,534] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,534] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
23:16:41 kafka | [2024-02-19 23:14:17,534] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
23:16:41 kafka | [2024-02-19 23:14:17,554] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,559] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,564] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
23:16:41 kafka | [2024-02-19 23:14:17,572] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
23:16:41 kafka | [2024-02-19 23:14:17,574] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,574] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,574] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,574] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,577] INFO [Controller id=1] List of topics to be deleted:  (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,577] INFO [Controller id=1] List of topics ineligible for deletion:  (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,578] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,578] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
23:16:41 kafka | [2024-02-19 23:14:17,579] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,581] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:41 kafka | [2024-02-19 23:14:17,583] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:17,591] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
23:16:41 kafka | [2024-02-19 23:14:17,593] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
23:16:41 kafka | [2024-02-19 23:14:17,597] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
23:16:41 kafka | [2024-02-19 23:14:17,598] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
23:16:41 kafka | [2024-02-19 23:14:17,598] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
23:16:41 kafka | [2024-02-19 23:14:17,599] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
23:16:41 kafka | [2024-02-19 23:14:17,603] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
23:16:41 kafka | [2024-02-19 23:14:17,603] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
23:16:41 kafka | [2024-02-19 23:14:17,603] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,606] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.8:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
23:16:41 kafka | [2024-02-19 23:14:17,609] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
23:16:41 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed.
23:16:41 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
23:16:41 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298)
23:16:41 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251)
23:16:41 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)
23:16:41 kafka | [2024-02-19 23:14:17,609] INFO [Controller id=1] Partitions undergoing preferred replica election:  (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,616] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient)
23:16:41 kafka | [2024-02-19 23:14:17,617] INFO [Controller id=1] Partitions that completed preferred replica election:  (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,617] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion:  (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,617] INFO [Controller id=1] Resuming preferred replica election for partitions:  (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,618] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,636] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:17,638] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
23:16:41 kafka | [2024-02-19 23:14:17,657] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
23:16:41 kafka | [2024-02-19 23:14:17,663] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
23:16:41 kafka | [2024-02-19 23:14:17,666] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
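The IOException above is a startup race: the controller dials its own listener (kafka/172.17.0.8:9092) before the SocketServer has enabled request processing, and the RequestSendThread simply retries until it connects a few entries later. A hedged sketch of the same wait-until-ready idea from a client's point of view (class name and fixed backoff are hypothetical, not part of this job):

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    public class WaitForBroker {
        // Polls the broker until it answers a metadata request, mirroring the
        // controller's retry loop visible in the log above.
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                while (true) {
                    try {
                        admin.listTopics().names().get();
                        System.out.println("broker is ready");
                        return;
                    } catch (Exception e) {
                        System.out.println("not ready yet, retrying: " + e.getMessage());
                        Thread.sleep(1000); // simple fixed backoff
                    }
                }
            }
        }
    }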
23:16:41 kafka | [2024-02-19 23:14:17,687] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser)
23:16:41 kafka | [2024-02-19 23:14:17,687] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser)
23:16:41 kafka | [2024-02-19 23:14:17,687] INFO Kafka startTimeMs: 1708384457678 (org.apache.kafka.common.utils.AppInfoParser)
23:16:41 kafka | [2024-02-19 23:14:17,690] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
23:16:41 kafka | [2024-02-19 23:14:17,721] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
23:16:41 kafka | [2024-02-19 23:14:17,816] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:17,859] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
23:16:41 kafka | [2024-02-19 23:14:17,897] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
23:16:41 kafka | [2024-02-19 23:14:22,637] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:22,638] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:45,005] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:45,014] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:45,016] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
23:16:41 kafka | [2024-02-19 23:14:45,017] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
23:16:41 kafka | [2024-02-19 23:14:45,058] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(TVE3Kq3BQlWiihp0MJOTdw),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(LJ8qdtXjQImen0RiTLNUHA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:45,060] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
23:16:41 kafka | [2024-02-19 23:14:45,062] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,062] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,062] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
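Here the broker auto-creates policy-pdp-pap (one partition, one replica) and the internal __consumer_offsets topic (50 compacted partitions) via kafka.zk.AdminZkClient. The client-side equivalent of this step is AdminClient#createTopics; a sketch, where example-compacted is a hypothetical stand-in, since __consumer_offsets itself is broker-managed and should not be created by hand:

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreatePdpPapTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // Mirrors the log: 1 partition, replication factor 1 (single-broker CSIT cluster).
                NewTopic pdpPap = new NewTopic("policy-pdp-pap", 1, (short) 1);
                // Configs echoed from the __consumer_offsets creation line, for illustration only.
                NewTopic offsetsLike = new NewTopic("example-compacted", 50, (short) 1)
                        .configs(Map.of("cleanup.policy", "compact",
                                        "compression.type", "producer",
                                        "segment.bytes", "104857600"));
                admin.createTopics(List.of(pdpPap, offsetsLike)).all().get();
            }
        }
    }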
23:16:41 kafka | [2024-02-19 23:14:45,062] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.158389198Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.539705ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.162359037Z level=info msg="Executing migration" id="permission attribute migration"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.170106852Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.749185ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.175572436Z level=info msg="Executing migration" id="permission identifier migration"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.183447941Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.875285ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.18979708Z level=info msg="Executing migration" id="add permission identifier index"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.190629183Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=831.943µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.199210842Z level=info msg="Executing migration" id="create query_history table v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.200672899Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.461447ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.204547346Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.206359334Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.811098ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.212415611Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.212506992Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=92.171µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.216117238Z level=info msg="Executing migration" id="rbac disabled migrator"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.216184348Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=75.35µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.219882964Z level=info msg="Executing migration" id="teams permissions migration"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.220733849Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=850.995µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.224699126Z level=info msg="Executing migration" id="dashboard permissions"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.225768601Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.071665ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.231324746Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.232010439Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=690.283µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.235487265Z level=info msg="Executing migration" id="drop managed folder create actions"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.235804696Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=316.791µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.239654953Z level=info msg="Executing migration" id="alerting notification permissions"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.240194226Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=539.363µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.275283773Z level=info msg="Executing migration" id="create query_history_star table v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.276554349Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.270516ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.285244568Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.286502203Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.257335ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.291177744Z level=info msg="Executing migration" id="add column org_id in query_history_star"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.303328969Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=12.152555ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.310367471Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.310439511Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=72.9µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.316560718Z level=info msg="Executing migration" id="create correlation table v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.318009314Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.449026ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.321825142Z level=info msg="Executing migration" id="add index correlations.uid"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.323022017Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.196855ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.32799057Z level=info msg="Executing migration" id="add index correlations.source_uid"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.329527997Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.535727ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.335365793Z level=info msg="Executing migration" id="add correlation config column"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.344659654Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.294541ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.348340901Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,063] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
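The state.change.logger entries here record each new partition moving from NonExistentPartition to NewPartition before it is brought online. A toy model of that transition check follows; the enum and transition table are hypothetical simplifications (the real machine, with more states, is kafka.controller.ZkPartitionStateMachine):

    import java.util.Map;
    import java.util.Set;

    public class PartitionStateModel {
        // Only the two states visible in this log; the real controller has more.
        enum State { NonExistentPartition, NewPartition }

        // Legal source states for each target, per the transitions logged above.
        static final Map<State, Set<State>> VALID_PREVIOUS =
                Map.of(State.NewPartition, Set.of(State.NonExistentPartition));

        static State transition(State from, State to) {
            if (!VALID_PREVIOUS.getOrDefault(to, Set.of()).contains(from)) {
                throw new IllegalStateException(from + " -> " + to + " is not a legal transition");
            }
            return to;
        }

        public static void main(String[] args) {
            System.out.println(transition(State.NonExistentPartition, State.NewPartition));
        }
    }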
23:16:41 kafka | [2024-02-19 23:14:45,064] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,064] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,064] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,064] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,064] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,064] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,064] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,064] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,064] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,064] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,064] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,064] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,070] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,071] INFO [Controller id=1 epoch=1] Changed partition
__consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,071] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:41 policy-pap | security.protocol = PLAINTEXT 23:16:41 policy-pap | security.providers = null 23:16:41 policy-pap | send.buffer.bytes = 131072 23:16:41 policy-pap | session.timeout.ms = 45000 23:16:41 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:41 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:41 policy-pap | ssl.cipher.suites = null 23:16:41 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:41 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:41 policy-pap | ssl.engine.factory.class = null 23:16:41 policy-pap | ssl.key.password = null 23:16:41 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:41 policy-pap | ssl.keystore.certificate.chain = null 23:16:41 policy-pap | ssl.keystore.key = null 23:16:41 policy-pap | ssl.keystore.location = null 23:16:41 policy-pap | ssl.keystore.password = null 23:16:41 policy-pap | ssl.keystore.type = JKS 23:16:41 policy-pap | ssl.protocol = TLSv1.3 23:16:41 policy-pap | ssl.provider = null 23:16:41 policy-pap | ssl.secure.random.implementation = null 23:16:41 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:41 policy-pap | ssl.truststore.certificates = null 23:16:41 policy-pap | ssl.truststore.location = null 23:16:41 policy-pap | ssl.truststore.password = null 23:16:41 policy-pap | ssl.truststore.type = JKS 23:16:41 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:41 policy-pap | 23:16:41 policy-pap | [2024-02-19T23:14:42.763+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:41 policy-pap | [2024-02-19T23:14:42.763+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:41 policy-pap | [2024-02-19T23:14:42.763+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708384482763 23:16:41 policy-pap | [2024-02-19T23:14:42.763+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.349432096Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.091385ms 23:16:41 grafana | logger=migrator 
t=2024-02-19T23:14:18.35478798Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.355897215Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.104915ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.361931022Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.39283473Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=30.903998ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.396364106Z level=info msg="Executing migration" id="create correlation v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.39707312Z level=info msg="Migration successfully executed" id="create correlation v2" duration=708.694µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.401955711Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.403172187Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.216246ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.406686882Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 23:16:41 policy-pap | [2024-02-19T23:14:43.074+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 23:16:41 policy-pap | [2024-02-19T23:14:43.221+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:41 policy-pap | [2024-02-19T23:14:43.462+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@49fb693d, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@38197e82, org.springframework.security.web.context.SecurityContextHolderFilter@6e12f38c, org.springframework.security.web.header.HeaderWriterFilter@1d33e72e, org.springframework.security.web.authentication.logout.LogoutFilter@6a97517, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@291028d7, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@5a9baba8, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@b5311cb, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@5516ee5, org.springframework.security.web.access.ExceptionTranslationFilter@534d0cfa, org.springframework.security.web.access.intercept.AuthorizationFilter@3361d286] 23:16:41 policy-pap | [2024-02-19T23:14:44.308+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:41 policy-pap | [2024-02-19T23:14:44.401+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:41 policy-pap | [2024-02-19T23:14:44.427+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 23:16:41 policy-pap | [2024-02-19T23:14:44.446+00:00|INFO|ServiceManager|main] Policy PAP starting 23:16:41 policy-pap | [2024-02-19T23:14:44.446+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 23:16:41 policy-pap | [2024-02-19T23:14:44.446+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 23:16:41 policy-pap | [2024-02-19T23:14:44.447+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 23:16:41 policy-pap | [2024-02-19T23:14:44.447+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 23:16:41 policy-pap | [2024-02-19T23:14:44.447+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 23:16:41 policy-pap | [2024-02-19T23:14:44.447+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 23:16:41 policy-pap | [2024-02-19T23:14:44.451+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@40071890 23:16:41 policy-pap | [2024-02-19T23:14:44.461+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, 
apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
23:16:41 policy-pap | [2024-02-19T23:14:44.462+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:41 policy-pap | allow.auto.create.topics = true
23:16:41 policy-pap | auto.commit.interval.ms = 5000
23:16:41 policy-pap | auto.include.jmx.reporter = true
23:16:41 policy-pap | auto.offset.reset = latest
23:16:41 policy-pap | bootstrap.servers = [kafka:9092]
23:16:41 policy-pap | check.crcs = true
23:16:41 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:41 policy-pap | client.id = consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3
23:16:41 policy-pap | client.rack = 
23:16:41 policy-pap | connections.max.idle.ms = 540000
23:16:41 policy-pap | default.api.timeout.ms = 60000
23:16:41 policy-pap | enable.auto.commit = true
23:16:41 policy-pap | exclude.internal.topics = true
23:16:41 policy-pap | fetch.max.bytes = 52428800
23:16:41 policy-pap | fetch.max.wait.ms = 500
23:16:41 policy-pap | fetch.min.bytes = 1
23:16:41 policy-pap | group.id = d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73
23:16:41 policy-pap | group.instance.id = null
23:16:41 policy-pap | heartbeat.interval.ms = 3000
23:16:41 policy-pap | interceptor.classes = []
23:16:41 policy-pap | internal.leave.group.on.close = true
23:16:41 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:41 policy-pap | isolation.level = read_uncommitted
23:16:41 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:41 policy-pap | max.partition.fetch.bytes = 1048576
23:16:41 policy-pap | max.poll.interval.ms = 300000
23:16:41 policy-pap | max.poll.records = 500
23:16:41 policy-pap | metadata.max.age.ms = 300000
23:16:41 policy-pap | metric.reporters = []
23:16:41 policy-pap | metrics.num.samples = 2
23:16:41 policy-pap | metrics.recording.level = INFO
23:16:41 policy-pap | metrics.sample.window.ms = 30000
23:16:41 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:41 policy-pap | receive.buffer.bytes = 65536
23:16:41 policy-pap | reconnect.backoff.max.ms = 1000
23:16:41 policy-pap | reconnect.backoff.ms = 50
23:16:41 policy-pap | request.timeout.ms = 30000
23:16:41 policy-pap | retry.backoff.ms = 100
23:16:41 policy-pap | sasl.client.callback.handler.class = null
23:16:41 policy-pap | sasl.jaas.config = null
23:16:41 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:41 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:41 policy-pap | sasl.kerberos.service.name = null
23:16:41 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:41 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:41 policy-pap | sasl.login.callback.handler.class = null
23:16:41 policy-pap | sasl.login.class = null
23:16:41 policy-pap | sasl.login.connect.timeout.ms = null
23:16:41 policy-pap | sasl.login.read.timeout.ms = null
23:16:41 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:41 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:41 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:41 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:41 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:41 policy-pap | sasl.login.retry.backoff.ms = 100
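[editor's note] The dump above is the standard kafka-clients ConsumerConfig printout for PAP's third consumer. A standalone sketch showing how the key values map onto a plain KafkaConsumer subscribed to policy-pdp-pap; this assumes the kafka-clients dependency on the classpath, and PAP's real consumer is built through its KafkaConsumerWrapper rather than like this:

// Minimal sketch: a consumer configured with the salient values from the dump.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73"); // example value from this run
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "45000");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap")); // matches "Subscribed to topic(s)" above
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
            }
        }
    }
}

Everything not set explicitly falls back to the defaults printed in the dump (PLAINTEXT security, 500 max.poll.records, and so on).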
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.407922568Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.235546ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.411448384Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.412665309Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.215845ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.418102323Z level=info msg="Executing migration" id="copy correlation v1 to v2"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.418540235Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=437.662µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.422142712Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.423349777Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.203775ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.426883123Z level=info msg="Executing migration" id="add provisioning column"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.435081509Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.198066ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.440684585Z level=info msg="Executing migration" id="create entity_events table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.441462878Z level=info msg="Migration successfully executed" id="create entity_events table" duration=778.163µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.445159595Z level=info msg="Executing migration" id="create dashboard public config v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.446102809Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=942.964µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.449570584Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.450049377Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.455463331Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.455924204Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.460336953Z level=info msg="Executing migration" id="Drop old dashboard public config table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.461627149Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.290676ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.465412476Z level=info msg="Executing migration" id="recreate dashboard public config v1"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.466870643Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.460407ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.473961424Z level=info msg="Executing migration"
id="create index UQE_dashboard_public_config_uid - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.47515949Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.197076ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.480282722Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.482128731Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.897269ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.489358444Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.490489008Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.130184ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.495683802Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.49758579Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.901638ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.502600262Z level=info msg="Executing migration" id="Drop public config table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.503411036Z level=info msg="Migration successfully executed" id="Drop public config table" duration=810.744µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.507918846Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.508969172Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.049726ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.512674678Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.514490966Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.815228ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.52430085Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.526397649Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=7.329753ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.531113341Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.533810712Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=2.697941ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.541065315Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.573719172Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=32.654167ms 23:16:41 policy-pap | sasl.mechanism = GSSAPI 23:16:41 policy-pap | 
sasl.oauthbearer.clock.skew.seconds = 30 23:16:41 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:41 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:41 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:41 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:41 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:41 policy-pap | security.protocol = PLAINTEXT 23:16:41 policy-pap | security.providers = null 23:16:41 policy-pap | send.buffer.bytes = 131072 23:16:41 policy-pap | session.timeout.ms = 45000 23:16:41 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:41 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.577115217Z level=info msg="Executing migration" id="add annotations_enabled column" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.585793366Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.678239ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.589326461Z level=info msg="Executing migration" id="add time_selection_enabled column" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.595510678Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.183657ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.600424311Z level=info msg="Executing migration" id="delete orphaned public dashboards" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.600849063Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=426.742µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.604676611Z level=info msg="Executing migration" id="add share column" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.61343101Z level=info msg="Migration successfully executed" id="add share column" duration=8.754139ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.617915709Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 23:16:41 kafka | [2024-02-19 23:14:45,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,072] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state 
from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,072] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,084] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,084] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,084] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,084] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,084] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,084] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,084] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,084] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,084] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,085] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,085] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,085] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,085] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,085] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,085] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,086] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,086] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 
from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,086] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,086] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,086] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,086] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,086] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,086] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,086] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,086] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,086] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,086] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.618192451Z level=info msg="Migration successfully executed" id="backfill 
empty share column fields with default of public" duration=276.412µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.622933902Z level=info msg="Executing migration" id="create file table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.623806746Z level=info msg="Migration successfully executed" id="create file table" duration=872.354µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.627577713Z level=info msg="Executing migration" id="file table idx: path natural pk" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.629262351Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.677848ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.633044578Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.634787196Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.740148ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.638627773Z level=info msg="Executing migration" id="create file_meta table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.639474856Z level=info msg="Migration successfully executed" id="create file_meta table" duration=841.033µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.644056757Z level=info msg="Executing migration" id="file table idx: path key" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.645617214Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.558147ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.652149273Z level=info msg="Executing migration" id="set path collation in file table" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.652296124Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=146.991µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.694511533Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.694613224Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=102.101µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.700695221Z level=info msg="Executing migration" id="managed permissions migration" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.701594075Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=898.564µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.705699834Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.705942805Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=242.731µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.70958865Z level=info msg="Executing migration" id="RBAC action name migrator" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.710326244Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=741.434µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.715046595Z level=info msg="Executing migration" id="Add UID column to playlist" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.727567642Z level=info msg="Migration successfully executed" id="Add UID column to 
playlist" duration=12.525227ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.732411993Z level=info msg="Executing migration" id="Update uid column values in playlist" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.732716094Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=304.161µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.738054878Z level=info msg="Executing migration" id="Add index for uid in playlist" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.739250214Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.195216ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.747629971Z level=info msg="Executing migration" id="update group index for alert rules" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.748432305Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=802.614µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.754833323Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.755228876Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=394.893µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.758979642Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.759753685Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=772.973µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.763427902Z level=info msg="Executing migration" id="add action column to seed_assignment" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.772252322Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=8.82273ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.777196664Z level=info msg="Executing migration" id="add scope column to seed_assignment" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.785932913Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.734099ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.790983175Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.79178098Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=797.665µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.796074349Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.905235998Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=109.1648ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.910229641Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.911017224Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=786.733µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.91454405Z level=info msg="Executing migration" id="add unique index 
builtin_role_action_scope" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.915437184Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=892.624µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.91904942Z level=info msg="Executing migration" id="add primary key to seed_assigment" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.956626048Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=37.573448ms 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.963419599Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.96358959Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=166.831µs 23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.967150836Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 23:16:41 kafka | [2024-02-19 23:14:45,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,087] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,088] TRACE [Controller id=1 epoch=1] Changed state of replica 
1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,088] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,088] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,313] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,314] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
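[editor's note] Every OnlinePartition entry above carries the same LeaderAndIsr payload: in this single-broker run, broker 1 becomes leader of each partition with itself as the entire ISR. As a reading aid, the logged fields can be modeled as a plain Java record; this is our illustration, not Kafka's internal class, and it assumes Java 16+ for records:

// Reading aid only: the fields the controller prints for each partition.
import java.util.List;

public class LeaderAndIsrSketch {
    record LeaderAndIsr(int leader, int leaderEpoch, List<Integer> isr,
                        String leaderRecoveryState, int partitionEpoch) {}

    public static void main(String[] args) {
        // Mirrors the log: leader=1, leaderEpoch=0, ISR = {broker 1}, RECOVERED, partitionEpoch=0
        LeaderAndIsr state = new LeaderAndIsr(1, 0, List.of(1), "RECOVERED", 0);
        System.out.println(state);
    }
}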
kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 23:16:41 policy-pap | ssl.cipher.suites = null 23:16:41 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:41 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:41 policy-pap | ssl.engine.factory.class = null 23:16:41 policy-pap | ssl.key.password = null 23:16:41 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:41 policy-pap | ssl.keystore.certificate.chain = null 23:16:41 policy-pap | ssl.keystore.key = null 23:16:41 policy-pap | ssl.keystore.location = null 23:16:41 policy-pap | ssl.keystore.password = null 23:16:41 policy-pap | ssl.keystore.type = JKS 23:16:41 policy-pap | ssl.protocol = TLSv1.3 23:16:41 policy-pap | ssl.provider = null 23:16:41 policy-pap | ssl.secure.random.implementation = null 23:16:41 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:41 policy-pap | ssl.truststore.certificates = null 23:16:41 policy-pap | ssl.truststore.location = null 23:16:41 policy-pap | ssl.truststore.password = null 23:16:41 policy-pap | ssl.truststore.type = JKS 23:16:41 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:41 policy-pap | 23:16:41 policy-pap | [2024-02-19T23:14:44.466+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:41 policy-pap | [2024-02-19T23:14:44.466+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:41 policy-pap | [2024-02-19T23:14:44.466+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708384484466 23:16:41 policy-pap | [2024-02-19T23:14:44.466+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Subscribed to topic(s): policy-pdp-pap 23:16:41 policy-pap | [2024-02-19T23:14:44.467+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 23:16:41 policy-pap | [2024-02-19T23:14:44.467+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=84cd7010-e55c-4ef3-a0d9-34c0a94040db, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@277474fc 23:16:41 policy-pap | 
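Editor's note on the ssl.*/value.deserializer dump just above: it is the stock Kafka ConsumerConfig banner that policy-pap prints for each consumer it builds (plaintext transport, StringDeserializer on key and value). As a rough, minimal sketch only — not the actual policy/common KafkaConsumerWrapper shown in the log — an equivalent standalone consumer for the policy-pdp-pap topic would look like this; the class name is invented for illustration, all config values come from the dump:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Hypothetical illustration; mirrors the dumped config, not PAP's own wrapper code.
public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // bootstrap.servers from the dump
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");          // group.id from the dump
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");     // auto.offset.reset from the dump
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap")); // topic from the "Subscribed to topic(s)" line
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15)); // fetchTimeout=15000
            records.forEach(r -> System.out.printf("%s-%d@%d %s%n", r.topic(), r.partition(), r.offset(), r.value()));
        }
    }
}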
23:16:41 policy-pap | [2024-02-19T23:14:44.467+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=84cd7010-e55c-4ef3-a0d9-34c0a94040db, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
23:16:41 policy-pap | [2024-02-19T23:14:44.467+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:41 policy-pap | 	allow.auto.create.topics = true
23:16:41 policy-pap | 	auto.commit.interval.ms = 5000
23:16:41 policy-pap | 	auto.include.jmx.reporter = true
23:16:41 policy-pap | 	auto.offset.reset = latest
23:16:41 policy-pap | 	bootstrap.servers = [kafka:9092]
23:16:41 policy-pap | 	check.crcs = true
23:16:41 policy-pap | 	client.dns.lookup = use_all_dns_ips
23:16:41 policy-pap | 	client.id = consumer-policy-pap-4
23:16:41 policy-pap | 	client.rack = 
23:16:41 policy-pap | 	connections.max.idle.ms = 540000
23:16:41 policy-pap | 	default.api.timeout.ms = 60000
23:16:41 policy-pap | 	enable.auto.commit = true
23:16:41 policy-pap | 	exclude.internal.topics = true
23:16:41 policy-pap | 	fetch.max.bytes = 52428800
23:16:41 policy-pap | 	fetch.max.wait.ms = 500
23:16:41 policy-pap | 	fetch.min.bytes = 1
23:16:41 policy-pap | 	group.id = policy-pap
23:16:41 policy-pap | 	group.instance.id = null
23:16:41 policy-pap | 	heartbeat.interval.ms = 3000
23:16:41 policy-pap | 	interceptor.classes = []
23:16:41 policy-pap | 	internal.leave.group.on.close = true
23:16:41 policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
23:16:41 policy-pap | 	isolation.level = read_uncommitted
23:16:41 policy-pap | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:41 policy-pap | 	max.partition.fetch.bytes = 1048576
23:16:41 policy-pap | 	max.poll.interval.ms = 300000
23:16:41 policy-pap | 	max.poll.records = 500
23:16:41 policy-pap | 	metadata.max.age.ms = 300000
23:16:41 policy-pap | 	metric.reporters = []
23:16:41 policy-pap | 	metrics.num.samples = 2
23:16:41 policy-pap | 	metrics.recording.level = INFO
23:16:41 policy-pap | 	metrics.sample.window.ms = 30000
23:16:41 policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:41 policy-pap | 	receive.buffer.bytes = 65536
23:16:41 policy-pap | 	reconnect.backoff.max.ms = 1000
23:16:41 policy-pap | 	reconnect.backoff.ms = 50
23:16:41 policy-pap | 	request.timeout.ms = 30000
23:16:41 policy-pap | 	retry.backoff.ms = 100
23:16:41 policy-pap | 	sasl.client.callback.handler.class = null
23:16:41 policy-pap | 	sasl.jaas.config = null
23:16:41 policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:41 policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
23:16:41 policy-pap | 	sasl.kerberos.service.name = null
23:16:41 policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
23:16:41 policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:41 policy-pap | 	sasl.login.callback.handler.class = null
23:16:41 policy-pap | 	sasl.login.class = null
23:16:41 policy-pap | 	sasl.login.connect.timeout.ms = null
23:16:41 policy-pap | 	sasl.login.read.timeout.ms = null
23:16:41 policy-pap | 	sasl.login.refresh.buffer.seconds = 300
23:16:41 policy-pap | 	sasl.login.refresh.min.period.seconds = 60
23:16:41 policy-pap | 	sasl.login.refresh.window.factor = 0.8
23:16:41 policy-pap | 	sasl.login.refresh.window.jitter = 0.05
23:16:41 policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
23:16:41 policy-pap | 	sasl.login.retry.backoff.ms = 100
23:16:41 policy-pap | 	sasl.mechanism = GSSAPI
23:16:41 policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
23:16:41 policy-pap | 	sasl.oauthbearer.expected.audience = null
23:16:41 policy-pap | 	sasl.oauthbearer.expected.issuer = null
23:16:41 policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:41 policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:41 policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:41 policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
23:16:41 policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
23:16:41 policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
23:16:41 policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
23:16:41 policy-pap | 	security.protocol = PLAINTEXT
23:16:41 policy-pap | 	security.providers = null
23:16:41 policy-pap | 	send.buffer.bytes = 131072
23:16:41 policy-pap | 	session.timeout.ms = 45000
23:16:41 policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.967507347Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=361.391µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.972187939Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.97243898Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=247.391µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.976247006Z level=info msg="Executing migration" id="create folder table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.977327082Z level=info msg="Migration successfully executed" id="create folder table" duration=1.098276ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.981095218Z level=info msg="Executing migration" id="Add index for parent_uid"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.982291144Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.195736ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.987030035Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.988212251Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.181916ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.992284989Z level=info msg="Executing migration" id="Update folder title length"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.992333829Z level=info msg="Migration successfully executed" id="Update folder title length" duration=50.67µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.997507452Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:18.999372751Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.863889ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.005598008Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.007969259Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=2.384831ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.013576874Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.01494294Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.366476ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.019477961Z level=info msg="Executing migration" id="Sync dashboard and folder table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.020054123Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=575.372µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.024494573Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.024913035Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=417.232µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.028513131Z level=info msg="Executing migration" id="create anon_device table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.029538515Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.022444ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.034174346Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.035501722Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.327116ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.03961007Z level=info msg="Executing migration" id="add index anon_device.updated_at"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.041070087Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.458587ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.04629579Z level=info msg="Executing migration" id="create signing_key table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.047683856Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.386596ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.051973425Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.053340511Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.366356ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.056819357Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.058456804Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.636207ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.064520731Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.065136414Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=614.793µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.069108482Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.082045959Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=12.932067ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.119974408Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.121176333Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.204325ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.128224665Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.130217254Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.996709ms
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.135677358Z level=info msg="Executing migration" id="create sso_setting table"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.136663953Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=985.695µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.142457339Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.143748444Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.292785ms
23:16:41 policy-pap | 	socket.connection.setup.timeout.ms = 10000
23:16:41 policy-pap | 	ssl.cipher.suites = null
23:16:41 policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:41 policy-pap | 	ssl.endpoint.identification.algorithm = https
23:16:41 policy-pap | 	ssl.engine.factory.class = null
23:16:41 policy-pap | 	ssl.key.password = null
23:16:41 policy-pap | 	ssl.keymanager.algorithm = SunX509
23:16:41 policy-pap | 	ssl.keystore.certificate.chain = null
23:16:41 policy-pap | 	ssl.keystore.key = null
23:16:41 policy-pap | 	ssl.keystore.location = null
23:16:41 policy-pap | 	ssl.keystore.password = null
23:16:41 policy-pap | 	ssl.keystore.type = JKS
23:16:41 policy-pap | 	ssl.protocol = TLSv1.3
23:16:41 policy-pap | 	ssl.provider = null
23:16:41 policy-pap | 	ssl.secure.random.implementation = null
23:16:41 policy-pap | 	ssl.trustmanager.algorithm = PKIX
23:16:41 policy-pap | 	ssl.truststore.certificates = null
23:16:41 policy-pap | 	ssl.truststore.location = null
23:16:41 policy-pap | 	ssl.truststore.password = null
23:16:41 policy-pap | 	ssl.truststore.type = JKS
23:16:41 policy-pap | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:41 policy-pap | 
23:16:41 policy-pap | [2024-02-19T23:14:44.471+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:41 policy-pap | [2024-02-19T23:14:44.471+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:41 policy-pap | [2024-02-19T23:14:44.471+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708384484471
23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.147263219Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.147746182Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=483.363µs
23:16:41 grafana | logger=migrator t=2024-02-19T23:14:19.15405192Z level=info msg="migrations completed" performed=526 skipped=0 duration=4.162018658s
23:16:41 grafana | logger=sqlstore t=2024-02-19T23:14:19.163509312Z level=info msg="Created default admin" user=admin
23:16:41 grafana | logger=sqlstore t=2024-02-19T23:14:19.163816253Z level=info msg="Created default organization"
23:16:41 grafana | logger=secrets t=2024-02-19T23:14:19.168013322Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
23:16:41 grafana | logger=plugin.store t=2024-02-19T23:14:19.183952834Z level=info msg="Loading plugins..."
23:16:41 grafana | logger=local.finder t=2024-02-19T23:14:19.220463636Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,318] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
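The become-leader TRACE lines above record the controller (id=1, epoch=1) making broker 1 the leader of every partition it just created: the 50 __consumer_offsets partitions plus policy-pdp-pap-0. If one wanted to confirm the resulting leadership from a test client, a minimal sketch with Kafka's AdminClient could look like the following (illustrative only; it is not part of this CSIT job, and the class name is invented):

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

// Hypothetical verification sketch: on this single-broker setup every partition
// should report leader id 1, matching the LeaderAndIsr(leader=1, ...) state above.
public class LeaderCheckSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // broker address from the log
        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(List.of("policy-pdp-pap"))
                    .allTopicNames().get().get("policy-pdp-pap");
            desc.partitions().forEach(p ->
                    System.out.printf("partition %d leader=%s isr=%s%n", p.partition(), p.leader(), p.isr()));
        }
    }
}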
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
23:16:41 grafana | logger=plugin.store t=2024-02-19T23:14:19.220522107Z level=info msg="Plugins loaded" count=55 duration=36.594283ms
23:16:41 grafana | logger=query_data t=2024-02-19T23:14:19.222782366Z level=info msg="Query Service initialization"
23:16:41 grafana | logger=live.push_http t=2024-02-19T23:14:19.226267432Z level=info msg="Live Push Gateway initialization"
23:16:41 grafana | logger=ngalert.migration t=2024-02-19T23:14:19.230603511Z level=info msg=Starting
23:16:41 grafana | logger=ngalert.migration orgID=1 t=2024-02-19T23:14:19.231273614Z level=info msg="Migrating alerts for organisation"
23:16:41 grafana | logger=ngalert.migration orgID=1 t=2024-02-19T23:14:19.231849567Z level=info msg="Alerts found to migrate" alerts=0
23:16:41 grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-02-19T23:14:19.233436814Z level=info msg="Completed legacy migration"
23:16:41 grafana | logger=infra.usagestats.collector t=2024-02-19T23:14:19.269077893Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
23:16:41 grafana | logger=provisioning.datasources t=2024-02-19T23:14:19.271478864Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
23:16:41 grafana | logger=provisioning.alerting t=2024-02-19T23:14:19.290115956Z level=info msg="starting to provision alerting"
23:16:41 grafana | logger=provisioning.alerting t=2024-02-19T23:14:19.290159496Z level=info msg="finished to provision alerting"
23:16:41 grafana | logger=grafanaStorageLogger t=2024-02-19T23:14:19.290647859Z level=info msg="Storage starting"
23:16:41 grafana | logger=ngalert.state.manager t=2024-02-19T23:14:19.29090983Z level=info msg="Warming state cache for startup"
23:16:41 grafana | logger=http.server t=2024-02-19T23:14:19.296507995Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
23:16:41 grafana | logger=ngalert.multiorg.alertmanager t=2024-02-19T23:14:19.296718956Z level=info msg="Starting MultiOrg Alertmanager"
23:16:41 grafana | logger=grafana-apiserver t=2024-02-19T23:14:19.297957621Z level=info msg="Authentication is disabled"
23:16:41 grafana | logger=grafana-apiserver t=2024-02-19T23:14:19.303711127Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
23:16:41 grafana | logger=ngalert.state.manager t=2024-02-19T23:14:19.335056456Z level=info msg="State cache has been initialized" states=0 duration=44.140736ms
23:16:41 grafana | logger=ngalert.scheduler t=2024-02-19T23:14:19.335112577Z level=info msg="Starting scheduler" tickInterval=10s
23:16:41 grafana | logger=ticker t=2024-02-19T23:14:19.335185748Z level=info msg=starting first_tick=2024-02-19T23:14:20Z
23:16:41 grafana | logger=sqlstore.transactions t=2024-02-19T23:14:19.36728353Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
23:16:41 grafana | logger=sqlstore.transactions t=2024-02-19T23:14:19.379800996Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
23:16:41 grafana | logger=plugins.update.checker t=2024-02-19T23:14:19.402601158Z level=info msg="Update check succeeded" duration=104.312196ms
23:16:41 grafana | logger=sqlstore.transactions t=2024-02-19T23:14:19.454067737Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
23:16:41 grafana | logger=sqlstore.transactions t=2024-02-19T23:14:19.473441373Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
23:16:41 grafana | logger=sqlstore.transactions t=2024-02-19T23:14:19.484567873Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
23:16:41 grafana | logger=sqlstore.transactions t=2024-02-19T23:14:19.495213141Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked"
23:16:41 grafana | logger=sqlstore.transactions t=2024-02-19T23:14:19.506565741Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=3 code="database is locked"
23:16:41 grafana | logger=grafana.update.checker t=2024-02-19T23:14:19.627293839Z level=info msg="Update check succeeded" duration=334.31299ms
23:16:41 grafana | logger=infra.usagestats t=2024-02-19T23:15:57.303358314Z level=info msg="Usage stats are ready to report"
23:16:41 policy-pap | [2024-02-19T23:14:44.471+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
23:16:41 policy-pap | [2024-02-19T23:14:44.471+00:00|INFO|ServiceManager|main] Policy PAP starting topics
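By this point Grafana has completed all 526 migrations and is serving on [::]:3000; the repeated "Database locked, sleeping then retrying" entries are SQLite write contention during startup that Grafana retries internally, not failures. A simple liveness probe against Grafana's documented /api/health endpoint would confirm the instance is up (sketch only; the grafana host name is an assumption about this compose network):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical health probe: /api/health returns HTTP 200 with a small JSON body
// (including "database": "ok") once Grafana's startup and migrations are done.
public class GrafanaHealthSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://grafana:3000/api/health")) // host assumed; port from the log
                .GET()
                .build();
        HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode() + " " + resp.body());
    }
}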
23:16:41 policy-pap | [2024-02-19T23:14:44.471+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=84cd7010-e55c-4ef3-a0d9-34c0a94040db, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
23:16:41 policy-pap | [2024-02-19T23:14:44.471+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
23:16:41 policy-pap | [2024-02-19T23:14:44.471+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=ff45a3bc-6234-42de-8368-517e19747cd9, alive=false, publisher=null]]: starting
23:16:41 policy-pap | [2024-02-19T23:14:44.488+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:16:41 policy-pap | 	acks = -1
23:16:41 policy-pap | 	auto.include.jmx.reporter = true
23:16:41 policy-pap | 	batch.size = 16384
23:16:41 policy-pap | 	bootstrap.servers = [kafka:9092]
23:16:41 policy-pap | 	buffer.memory = 33554432
23:16:41 policy-pap | 	client.dns.lookup = use_all_dns_ips
23:16:41 policy-pap | 	client.id = producer-1
23:16:41 policy-pap | 	compression.type = none
23:16:41 policy-pap | 	connections.max.idle.ms = 540000
23:16:41 policy-pap | 	delivery.timeout.ms = 120000
23:16:41 policy-pap | 	enable.idempotence = true
23:16:41 policy-pap | 	interceptor.classes = []
23:16:41 policy-pap | 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:41 policy-pap | 	linger.ms = 0
23:16:41 policy-pap | 	max.block.ms = 60000
23:16:41 policy-pap | 	max.in.flight.requests.per.connection = 5
23:16:41 policy-pap | 	max.request.size = 1048576
23:16:41 policy-pap | 	metadata.max.age.ms = 300000
23:16:41 policy-pap | 	metadata.max.idle.ms = 300000
23:16:41 policy-pap | 	metric.reporters = []
23:16:41 policy-pap | 	metrics.num.samples = 2
23:16:41 policy-pap | 	metrics.recording.level = INFO
23:16:41 policy-pap | 	metrics.sample.window.ms = 30000
23:16:41 policy-pap | 	partitioner.adaptive.partitioning.enable = true
23:16:41 policy-pap | 	partitioner.availability.timeout.ms = 0
23:16:41 policy-pap | 	partitioner.class = null
23:16:41 policy-pap | 	partitioner.ignore.keys = false
23:16:41 policy-pap | 	receive.buffer.bytes = 32768
23:16:41 policy-pap | 	reconnect.backoff.max.ms = 1000
23:16:41 policy-pap | 	reconnect.backoff.ms = 50
23:16:41 policy-pap | 	request.timeout.ms = 30000
23:16:41 policy-pap | 	retries = 2147483647
23:16:41 policy-pap | 	retry.backoff.ms = 100
23:16:41 policy-pap | 	sasl.client.callback.handler.class = null
23:16:41 policy-pap | 	sasl.jaas.config = null
23:16:41 policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:41 policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
23:16:41 policy-pap | 	sasl.kerberos.service.name = null
23:16:41 policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
23:16:41 policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:41 policy-pap | 	sasl.login.callback.handler.class = null
23:16:41 policy-pap | 	sasl.login.class = null
23:16:41 policy-pap | 	sasl.login.connect.timeout.ms = null
23:16:41 policy-pap | 	sasl.login.read.timeout.ms = null
23:16:41 policy-pap | 	sasl.login.refresh.buffer.seconds = 300
23:16:41 policy-pap | 	sasl.login.refresh.min.period.seconds = 60
23:16:41 policy-pap | 	sasl.login.refresh.window.factor = 0.8
23:16:41 policy-pap | 	sasl.login.refresh.window.jitter = 0.05
23:16:41 policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
23:16:41 policy-pap | 	sasl.login.retry.backoff.ms = 100
23:16:41 policy-pap | 	sasl.mechanism = GSSAPI
23:16:41 policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
23:16:41 policy-pap | 	sasl.oauthbearer.expected.audience = null
23:16:41 policy-pap | 	sasl.oauthbearer.expected.issuer = null
23:16:41 policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:41 policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:41 policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:41 policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
23:16:41 policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
23:16:41 policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
23:16:41 policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
23:16:41 policy-pap | 	security.protocol = PLAINTEXT
23:16:41 policy-pap | 	security.providers = null
23:16:41 policy-pap | 	send.buffer.bytes = 131072
23:16:41 policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
23:16:41 policy-pap | 	socket.connection.setup.timeout.ms = 10000
23:16:41 policy-pap | 	ssl.cipher.suites = null
23:16:41 policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:41 policy-pap | 	ssl.endpoint.identification.algorithm = https
23:16:41 policy-pap | 	ssl.engine.factory.class = null
23:16:41 policy-pap | 	ssl.key.password = null
23:16:41 policy-pap | 	ssl.keymanager.algorithm = SunX509
23:16:41 policy-pap | 	ssl.keystore.certificate.chain = null
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
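The ProducerConfig dump above (acks = -1, enable.idempotence = true, retries = 2147483647, StringSerializer on key and value) is the stock idempotent-producer profile the PAP sinks are created with. As a minimal sketch under those dumped values — again not PAP's InlineKafkaTopicSink itself, and with an invented class name — an equivalent standalone producer is:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Hypothetical illustration of the dumped producer profile.
public class PdpPapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");              // acks = -1 in the dump
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // enable.idempotence = true
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Topic from the sink entries; the payload is a placeholder, not a real PDP-PAP message.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"EXAMPLE\"}"));
            producer.flush();
        }
    }
}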
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,319] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,320] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
23:16:41 policy-pap | 	ssl.keystore.key = null
23:16:41 policy-pap | 	ssl.keystore.location = null
23:16:41 policy-pap | 	ssl.keystore.password = null
23:16:41 policy-pap | 	ssl.keystore.type = JKS
23:16:41 policy-pap | 	ssl.protocol = TLSv1.3
23:16:41 policy-pap | 	ssl.provider = null
23:16:41 policy-pap | 	ssl.secure.random.implementation = null
23:16:41 policy-pap | 	ssl.trustmanager.algorithm = PKIX
23:16:41 policy-pap | 	ssl.truststore.certificates = null
23:16:41 policy-pap | 	ssl.truststore.location = null
23:16:41 policy-pap | 	ssl.truststore.password = null
23:16:41 policy-pap | 	ssl.truststore.type = JKS
23:16:41 policy-pap | 	transaction.timeout.ms = 60000
23:16:41 policy-pap | 	transactional.id = null
23:16:41 policy-pap | 	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:41 policy-pap | 
23:16:41 policy-pap | [2024-02-19T23:14:44.500+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
23:16:41 policy-pap | [2024-02-19T23:14:44.530+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:41 policy-pap | [2024-02-19T23:14:44.530+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:41 policy-pap | [2024-02-19T23:14:44.530+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708384484530
23:16:41 policy-pap | [2024-02-19T23:14:44.530+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=ff45a3bc-6234-42de-8368-517e19747cd9, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
23:16:41 policy-pap | [2024-02-19T23:14:44.531+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=96d55021-9b66-4ff9-bac4-f91d46760353, alive=false, publisher=null]]: starting
23:16:41 policy-pap | [2024-02-19T23:14:44.531+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:16:41 policy-pap | 	acks = -1
23:16:41 policy-pap | 	auto.include.jmx.reporter = true
23:16:41 policy-pap | 	batch.size = 16384
23:16:41 policy-pap | 	bootstrap.servers = [kafka:9092]
23:16:41 policy-pap | 	buffer.memory = 33554432
23:16:41 policy-pap | 	client.dns.lookup = use_all_dns_ips
23:16:41 policy-pap | 	client.id = producer-2
23:16:41 policy-pap | 	compression.type = none
23:16:41 policy-pap | 	connections.max.idle.ms = 540000
23:16:41 policy-pap | 	delivery.timeout.ms = 120000
23:16:41 policy-pap | 	enable.idempotence = true
23:16:41 policy-pap | 	interceptor.classes = []
23:16:41 policy-pap | 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:41 policy-pap | 	linger.ms = 0
23:16:41 policy-pap | 	max.block.ms = 60000
23:16:41 policy-pap | 	max.in.flight.requests.per.connection = 5
23:16:41 policy-pap | 	max.request.size = 1048576
23:16:41 policy-pap | 	metadata.max.age.ms = 300000
23:16:41 policy-pap | 	metadata.max.idle.ms = 300000
23:16:41 kafka | [2024-02-19 23:14:45,320] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,320] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
23:14:45,320] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,329] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,333] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,335] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica 
(state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
23:16:41 policy-pap | metric.reporters = []
23:16:41 policy-pap | metrics.num.samples = 2
23:16:41 policy-pap | metrics.recording.level = INFO
23:16:41 policy-pap | metrics.sample.window.ms = 30000
23:16:41 policy-pap | partitioner.adaptive.partitioning.enable = true
23:16:41 policy-pap | partitioner.availability.timeout.ms = 0
23:16:41 policy-pap | partitioner.class = null
23:16:41 policy-pap | partitioner.ignore.keys = false
23:16:41 policy-pap | receive.buffer.bytes = 32768
23:16:41 policy-pap | reconnect.backoff.max.ms = 1000
23:16:41 policy-pap | reconnect.backoff.ms = 50
23:16:41 policy-pap | request.timeout.ms = 30000
23:16:41 policy-pap | retries = 2147483647
23:16:41 policy-pap | retry.backoff.ms = 100
23:16:41 policy-pap | sasl.client.callback.handler.class = null
23:16:41 policy-pap | sasl.jaas.config = null
23:16:41 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:41 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:41 policy-pap | sasl.kerberos.service.name = null
23:16:41 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:41 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:41 policy-pap | sasl.login.callback.handler.class = null
23:16:41 policy-pap | sasl.login.class = null
23:16:41 policy-pap | sasl.login.connect.timeout.ms = null
23:16:41 policy-pap | sasl.login.read.timeout.ms = null
23:16:41 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:41 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:41 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:41 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:41 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:41 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:41 policy-pap | sasl.mechanism = GSSAPI
23:16:41 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:41 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:41 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:41 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:41 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:41 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:41 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:41 policy-pap | security.protocol = PLAINTEXT
23:16:41 policy-pap | security.providers = null
23:16:41 policy-pap | send.buffer.bytes = 131072
23:16:41 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:41 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:41 policy-pap | ssl.cipher.suites = null
23:16:41 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:41 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:41 policy-pap | ssl.engine.factory.class = null
23:16:41 policy-pap | ssl.key.password = null
23:16:41 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:41 policy-pap | ssl.keystore.certificate.chain = null
23:16:41 policy-pap | ssl.keystore.key = null
23:16:41 policy-pap | ssl.keystore.location = null
23:16:41 policy-pap | ssl.keystore.password = null
23:16:41 policy-pap | ssl.keystore.type = JKS
23:16:41 policy-pap | ssl.protocol = TLSv1.3
23:16:41 policy-pap | ssl.provider = null
23:16:41 policy-pap | ssl.secure.random.implementation = null
23:16:41 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:41 policy-pap | ssl.truststore.certificates = null
23:16:41 policy-pap | ssl.truststore.location = null
23:16:41 policy-pap | ssl.truststore.password = null
23:16:41 policy-pap | ssl.truststore.type = JKS
23:16:41 policy-pap | transaction.timeout.ms = 60000
23:16:41 policy-pap | transactional.id = null
23:16:41 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:41 policy-pap |
23:16:41 policy-pap | [2024-02-19T23:14:44.532+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
23:16:41 policy-pap | [2024-02-19T23:14:44.534+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:41 policy-pap | [2024-02-19T23:14:44.534+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:41 policy-pap | [2024-02-19T23:14:44.534+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708384484534
23:16:41 policy-pap | [2024-02-19T23:14:44.535+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=96d55021-9b66-4ff9-bac4-f91d46760353, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
23:16:41 policy-pap | [2024-02-19T23:14:44.535+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
23:16:41 policy-pap | [2024-02-19T23:14:44.535+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
23:16:41 policy-pap | [2024-02-19T23:14:44.536+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
23:16:41 policy-pap | [2024-02-19T23:14:44.537+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
23:16:41 policy-pap | [2024-02-19T23:14:44.540+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
23:16:41 policy-pap | [2024-02-19T23:14:44.540+00:00|INFO|TimerManager|Thread-9] timer manager update started
23:16:41 policy-pap | [2024-02-19T23:14:44.542+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
23:16:41 policy-pap | [2024-02-19T23:14:44.542+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
23:16:41 policy-pap | [2024-02-19T23:14:44.542+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
23:16:41 policy-pap | [2024-02-19T23:14:44.544+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
23:16:41 policy-pap | [2024-02-19T23:14:44.544+00:00|INFO|ServiceManager|main] Policy PAP started
23:16:41 policy-pap | [2024-02-19T23:14:44.545+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.494 seconds (process running for 11.123)
23:16:41 policy-pap | [2024-02-19T23:14:44.980+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
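The dump above is the Kafka producer configuration that policy-pap logs while bringing up its InlineKafkaTopicSink producers: retries = 2147483647 with transactional.id = null, PLAINTEXT transport, and all SASL/SSL knobs at their defaults; the "Instantiated an idempotent producer" line confirms idempotence is on for these producers. For orientation only, a minimal sketch of building an equivalent producer with kafka-clients 3.6.x — the broker address kafka:9092 and the topic name come from this log, while the class name and payload are invented for illustration:

    // Sketch (not part of the CSIT run): a producer matching the dump above.
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PapStyleProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Matches the dump: effectively infinite retries. With idempotence
            // enabled and no transactional.id, the producer gets per-partition
            // exactly-once ordering but no cross-partition transactions.
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
            }
        }
    }

With idempotence on, the broker hands each producer a producer id and epoch, which is what the "ProducerId set to 0 with epoch 0" / "ProducerId set to 1 with epoch 0" lines below record.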
23:16:41 policy-pap | [2024-02-19T23:14:44.982+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: afQCmge3SLiyxoKHB7mgXQ
23:16:41 policy-pap | [2024-02-19T23:14:44.982+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: afQCmge3SLiyxoKHB7mgXQ
23:16:41 policy-pap | [2024-02-19T23:14:44.984+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: afQCmge3SLiyxoKHB7mgXQ
23:16:41 policy-pap | [2024-02-19T23:14:45.055+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:41 policy-pap | [2024-02-19T23:14:45.055+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Cluster ID: afQCmge3SLiyxoKHB7mgXQ
23:16:41 policy-pap | [2024-02-19T23:14:45.081+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:41 policy-pap | [2024-02-19T23:14:45.104+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
23:16:41 policy-pap | [2024-02-19T23:14:45.106+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
23:16:41 policy-pap | [2024-02-19T23:14:45.191+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
23:16:41 policy-pap | [2024-02-19T23:14:45.200+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition
__consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,336] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,336] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,341] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,343] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,343] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,343] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,343] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | 
[2024-02-19 23:14:45,343] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,343] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,343] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,344] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,344] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,344] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,344] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,344] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,344] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 
kafka | [2024-02-19 23:14:45,344] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,344] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,344] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,344] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,344] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,344] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,344] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,345] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,345] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 
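The TRACE flood around here is the ZooKeeper-mode controller (id=1, epoch=1) walking every replica of the 50 __consumer_offsets partitions plus policy-pdp-pap-0 from NewReplica to OnlineReplica, then sending the broker one LeaderAndIsr request with 51 become-leader and 0 become-follower partitions; on this single-broker compose stack, broker 1 is leader for everything with isr=[1]. LeaderAndIsr is an internal controller-to-broker RPC, so client code never sends it, but the leadership it establishes can be checked from outside; a sketch with the Admin API (kafka-clients assumed on the classpath, class name invented):

    // Sketch: inspect the leadership the LeaderAndIsr handling above produced.
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;
    import org.apache.kafka.common.TopicPartitionInfo;

    public class DescribePdpPapTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                TopicDescription td = admin.describeTopics(List.of("policy-pdp-pap"))
                        .allTopicNames().get().get("policy-pdp-pap");
                // On a one-broker cluster, leader and isr should both be node 1.
                for (TopicPartitionInfo p : td.partitions()) {
                    System.out.printf("partition=%d leader=%s isr=%s%n",
                            p.partition(), p.leader(), p.isr());
                }
            }
        }
    }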
23:16:41 kafka | [2024-02-19 23:14:45,345] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,345] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,345] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,345] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,345] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,345] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,345] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,345] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,345] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 
(state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,345] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,345] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,345] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 
epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,346] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,347] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from 
controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,347] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,347] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for 
partition __consumer_offsets-49 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,405] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:41 kafka | [2024-02-19 
23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,406] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:41 policy-pap | [2024-02-19T23:14:45.302+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:41 policy-pap | [2024-02-19T23:14:45.309+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:41 policy-pap | [2024-02-19T23:14:45.413+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:41 policy-pap | [2024-02-19T23:14:45.438+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:41 policy-pap | [2024-02-19T23:14:45.521+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:41 policy-pap | [2024-02-19T23:14:45.546+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:41 policy-pap | [2024-02-19T23:14:45.626+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:41 policy-pap | [2024-02-19T23:14:45.652+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:41 policy-pap | [2024-02-19T23:14:45.730+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:41 policy-pap | [2024-02-19T23:14:45.757+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:41 policy-pap | [2024-02-19T23:14:45.836+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:41 policy-pap | [2024-02-19T23:14:45.862+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:41 policy-pap | [2024-02-19T23:14:45.947+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Error while fetching metadata with correlation id 
18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:41 policy-pap | [2024-02-19T23:14:45.970+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:41 policy-pap | [2024-02-19T23:14:46.061+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
23:16:41 policy-pap | [2024-02-19T23:14:46.070+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] (Re-)joining group
23:16:41 policy-pap | [2024-02-19T23:14:46.075+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
23:16:41 policy-pap | [2024-02-19T23:14:46.078+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
23:16:41 policy-pap | [2024-02-19T23:14:46.103+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Request joining group due to: need to re-join with the given member-id: consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3-3d8c0b64-a932-4c61-8259-d6a7a58c73fa
23:16:41 policy-pap | [2024-02-19T23:14:46.103+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
23:16:41 policy-pap | [2024-02-19T23:14:46.103+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] (Re-)joining group
23:16:41 policy-pap | [2024-02-19T23:14:46.105+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-cc9d25bb-0239-4ab2-ab72-09c9a8b909d4
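The lines above show both of PAP's consumers (the policy-pap heartbeat group and the UUID-named policy-pdp-pap group) discovering their group coordinator and starting Kafka's deliberate two-step join: the first JoinGroup is failed with MemberIdRequiredException purely so the broker can hand the consumer a member id, and the immediate rejoin with that id succeeds in the lines that follow. Both sides of that handshake are driven by nothing more than subscribe()/poll(); a sketch (class name invented; broker address and group id taken from the log):

    // Sketch: a subscribe/poll consumer like the two PAP sources above. The
    // first poll() drives FindCoordinator -> JoinGroup (rejected so the broker
    // can assign a member id) -> rejoin with that id -> SyncGroup.
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PapStyleConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }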
23:16:41 policy-pap | [2024-02-19T23:14:46.106+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
23:16:41 policy-pap | [2024-02-19T23:14:46.106+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
23:16:41 policy-pap | [2024-02-19T23:14:49.131+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Successfully joined group with generation Generation{generationId=1, memberId='consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3-3d8c0b64-a932-4c61-8259-d6a7a58c73fa', protocol='range'}
23:16:41 policy-pap | [2024-02-19T23:14:49.138+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-cc9d25bb-0239-4ab2-ab72-09c9a8b909d4', protocol='range'}
23:16:41 policy-pap | [2024-02-19T23:14:49.146+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-cc9d25bb-0239-4ab2-ab72-09c9a8b909d4=Assignment(partitions=[policy-pdp-pap-0])}
23:16:41 policy-pap | [2024-02-19T23:14:49.146+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Finished assignment for group at generation 1: {consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3-3d8c0b64-a932-4c61-8259-d6a7a58c73fa=Assignment(partitions=[policy-pdp-pap-0])}
23:16:41 policy-pap | [2024-02-19T23:14:49.187+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-cc9d25bb-0239-4ab2-ab72-09c9a8b909d4', protocol='range'}
23:16:41 policy-pap | [2024-02-19T23:14:49.188+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
23:16:41 policy-pap | [2024-02-19T23:14:49.189+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Successfully synced group in generation Generation{generationId=1, memberId='consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3-3d8c0b64-a932-4c61-8259-d6a7a58c73fa', protocol='range'}
23:16:41 policy-pap | [2024-02-19T23:14:49.189+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
23:16:41 policy-pap | [2024-02-19T23:14:49.192+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
23:16:41 policy-pap | [2024-02-19T23:14:49.192+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Adding newly assigned partitions: policy-pdp-pap-0
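With one member per group and a single partition, the range assignor's outcome above is trivial: each group's sole member gets policy-pdp-pap-0. The "Found no committed offset" lines that follow are equally expected for brand-new groups; nothing has been committed to __consumer_offsets yet, so each consumer falls back to its auto.offset.reset policy and lands at FetchPosition offset=1 (consistent with "latest" and one record already on the partition, though the reset setting itself is not visible in this excerpt). A sketch of inspecting that state from the client (helper and class names invented; assumes the partition is already assigned to this consumer):

    // Sketch: why "Found no committed offset" leads to "Resetting offset".
    import java.util.Map;
    import java.util.Set;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class CommittedOffsetCheck {
        static void show(KafkaConsumer<String, String> consumer) {
            TopicPartition tp = new TopicPartition("policy-pdp-pap", 0);
            // The map value is null when the group has never committed for tp,
            // which is exactly the state logged below for both PAP groups.
            Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(Set.of(tp));
            System.out.println("committed=" + committed.get(tp)
                    + " position=" + consumer.position(tp));
        }
    }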
23:16:41 policy-pap | [2024-02-19T23:14:49.215+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
23:16:41 policy-pap | [2024-02-19T23:14:49.215+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Found no committed offset for partition policy-pdp-pap-0
23:16:41 policy-pap | [2024-02-19T23:14:49.240+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
23:16:41 policy-pap | [2024-02-19T23:14:49.240+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3, groupId=d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
23:16:41 policy-pap | [2024-02-19T23:14:50.157+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet'
23:16:41 policy-pap | [2024-02-19T23:14:50.158+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet'
23:16:41 policy-pap | [2024-02-19T23:14:50.161+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 3 ms
23:16:41 policy-pap | [2024-02-19T23:15:06.260+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
23:16:41 policy-pap | []
23:16:41 policy-pap | [2024-02-19T23:15:06.261+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:41 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5efb7248-7e09-4e3f-a3fc-dc46b4b44102","timestampMs":1708384506219,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup"}
23:16:41 policy-pap | [2024-02-19T23:15:06.261+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:41 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"5efb7248-7e09-4e3f-a3fc-dc46b4b44102","timestampMs":1708384506219,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup"}
23:16:41 policy-pap | [2024-02-19T23:15:06.271+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
23:16:41 policy-pap | [2024-02-19T23:15:06.386+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate starting
23:16:41 policy-pap | [2024-02-19T23:15:06.386+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate starting listener
23:16:41 policy-pap | [2024-02-19T23:15:06.387+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate starting timer
23:16:41 policy-pap | [2024-02-19T23:15:06.387+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=dbfed9da-433f-414e-99bd-a5afc818016c, expireMs=1708384536387]
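From here the log shows the actual PAP-to-PDP handshake over the policy-pdp-pap topic: apex's PDP_STATUS heartbeat arrives, PAP starts a PdpUpdate with a 30 s expiry timer (the registration at 23:15:06.387 plus 30000 ms gives exactly expireMs=1708384536387, and the "waiting 29998ms" line below reflects the 2 ms spent before the wait began), publishes a PDP_UPDATE, reads its own message back and discards it, and apex answers with a fresh PDP_STATUS. The payloads are plain JSON strings; a sketch of picking one apart (assumes jackson-databind is available; the trimmed payload keeps only fields present in the logged messages):

    // Sketch: parsing a PDP_STATUS-shaped payload like the ones in this log.
    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class PdpMessagePeek {
        public static void main(String[] args) throws Exception {
            String heartbeat = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                    + "\"messageName\":\"PDP_STATUS\",\"pdpGroup\":\"defaultGroup\"}";
            JsonNode node = new ObjectMapper().readTree(heartbeat);
            // PAP routes on messageName: PDP_STATUS is handled, while its own
            // PDP_UPDATE echoes are dropped ("discarding event of type PDP_UPDATE").
            System.out.println(node.get("messageName").asText()
                    + " from " + node.get("pdpGroup").asText());
        }
    }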
[2024-02-19T23:15:06.389+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate starting enqueue 23:16:41 policy-pap | [2024-02-19T23:15:06.389+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=dbfed9da-433f-414e-99bd-a5afc818016c, expireMs=1708384536387] 23:16:41 policy-pap | [2024-02-19T23:15:06.391+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:41 policy-pap | {"source":"pap-a92a4a8b-7770-4bfc-a655-2697c581a9e3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"dbfed9da-433f-414e-99bd-a5afc818016c","timestampMs":1708384506334,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-pap | [2024-02-19T23:15:06.391+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate started 23:16:41 policy-pap | [2024-02-19T23:15:06.419+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:41 policy-pap | {"source":"pap-a92a4a8b-7770-4bfc-a655-2697c581a9e3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"dbfed9da-433f-414e-99bd-a5afc818016c","timestampMs":1708384506334,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-pap | [2024-02-19T23:15:06.420+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:41 policy-pap | {"source":"pap-a92a4a8b-7770-4bfc-a655-2697c581a9e3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"dbfed9da-433f-414e-99bd-a5afc818016c","timestampMs":1708384506334,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-pap | [2024-02-19T23:15:06.421+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:41 policy-pap | [2024-02-19T23:15:06.421+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:41 policy-pap | [2024-02-19T23:15:06.445+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:41 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"87cf70eb-861c-4fe6-b963-0fa51b97d516","timestampMs":1708384506431,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup"} 23:16:41 policy-pap | [2024-02-19T23:15:06.453+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:41 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"87cf70eb-861c-4fe6-b963-0fa51b97d516","timestampMs":1708384506431,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup"} 23:16:41 policy-pap | [2024-02-19T23:15:06.453+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:41 policy-pap | [2024-02-19T23:15:06.457+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker 
id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,407] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,409] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 23:16:41 kafka | [2024-02-19 23:14:45,411] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,497] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 
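The ConsumerCoordinator and SubscriptionState entries earlier in this log trace a standard Kafka group start-up: each consumer joins its group, is assigned policy-pdp-pap-0, finds no committed offset for that partition, and resets to the position dictated by auto.offset.reset. Below is a minimal sketch of a consumer configured that way, assuming only a kafka-clients dependency and the broker address kafka:9092 reported in the log; the class name PdpPapTail and the settings shown are illustrative, not taken from the PAP source.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapTail {
        public static void main(String[] args) {
            Properties props = new Properties();
            // "kafka:9092" matches the leader endpoint reported in the log; adjust for your environment.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            // The group id drives the ConsumerCoordinator join/sync cycle seen in the log.
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            // With no committed offset for policy-pdp-pap-0, this setting decides the
            // FetchPosition that SubscriptionState resets to ("latest" is the Kafka default).
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                while (true) {
                    for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                        // PDP_STATUS / PDP_UPDATE payloads are JSON strings, as shown in the log.
                        System.out.printf("%s-%d@%d %s%n", rec.topic(), rec.partition(), rec.offset(), rec.value());
                    }
                }
            }
        }
    }

Run against the same broker, a consumer like this would be expected to emit join/sync and offset-reset INFO lines analogous to the consumer-policy-pap-4 entries above, and to print the PDP_STATUS heartbeat and PDP_UPDATE/PDP_STATE_CHANGE request/response JSON that the policy-pap entries record.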
23:16:41 kafka | [2024-02-19 23:14:45,520] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,525] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,526] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,528] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,548] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,549] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,549] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,549] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,550] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,557] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,558] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,558] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,558] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,559] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,565] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,566] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,566] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,566] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"dbfed9da-433f-414e-99bd-a5afc818016c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"0acc3e0c-854c-4c5d-90cc-11816db4d7f6","timestampMs":1708384506436,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-pap | [2024-02-19T23:15:06.471+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate stopping 23:16:41 policy-pap | [2024-02-19T23:15:06.471+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate stopping enqueue 23:16:41 policy-pap | [2024-02-19T23:15:06.471+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate stopping timer 23:16:41 policy-pap | [2024-02-19T23:15:06.471+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=dbfed9da-433f-414e-99bd-a5afc818016c, expireMs=1708384536387] 23:16:41 policy-pap | [2024-02-19T23:15:06.471+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate stopping listener 23:16:41 policy-pap | [2024-02-19T23:15:06.471+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate stopped 23:16:41 policy-pap | [2024-02-19T23:15:06.475+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:41 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"dbfed9da-433f-414e-99bd-a5afc818016c","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"0acc3e0c-854c-4c5d-90cc-11816db4d7f6","timestampMs":1708384506436,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-pap | [2024-02-19T23:15:06.475+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id dbfed9da-433f-414e-99bd-a5afc818016c 23:16:41 policy-pap | [2024-02-19T23:15:06.476+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate successful 23:16:41 policy-pap | [2024-02-19T23:15:06.476+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 start publishing next request 23:16:41 policy-pap | 
[2024-02-19T23:15:06.476+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpStateChange starting 23:16:41 policy-pap | [2024-02-19T23:15:06.476+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpStateChange starting listener 23:16:41 policy-pap | [2024-02-19T23:15:06.476+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpStateChange starting timer 23:16:41 policy-pap | [2024-02-19T23:15:06.476+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=35df8189-a6da-46ad-bacc-3ab4a6fe7616, expireMs=1708384536476] 23:16:41 policy-pap | [2024-02-19T23:15:06.476+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpStateChange starting enqueue 23:16:41 policy-pap | [2024-02-19T23:15:06.476+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpStateChange started 23:16:41 policy-pap | [2024-02-19T23:15:06.476+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=35df8189-a6da-46ad-bacc-3ab4a6fe7616, expireMs=1708384536476] 23:16:41 policy-pap | [2024-02-19T23:15:06.477+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:41 policy-pap | {"source":"pap-a92a4a8b-7770-4bfc-a655-2697c581a9e3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"35df8189-a6da-46ad-bacc-3ab4a6fe7616","timestampMs":1708384506335,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-pap | [2024-02-19T23:15:06.487+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:41 policy-pap | {"source":"pap-a92a4a8b-7770-4bfc-a655-2697c581a9e3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"35df8189-a6da-46ad-bacc-3ab4a6fe7616","timestampMs":1708384506335,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-pap | [2024-02-19T23:15:06.487+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 23:16:41 policy-pap | [2024-02-19T23:15:06.498+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:41 policy-pap | {"source":"pap-a92a4a8b-7770-4bfc-a655-2697c581a9e3","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"35df8189-a6da-46ad-bacc-3ab4a6fe7616","timestampMs":1708384506335,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-pap | [2024-02-19T23:15:06.498+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:16:41 policy-pap | [2024-02-19T23:15:06.499+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:41 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"35df8189-a6da-46ad-bacc-3ab4a6fe7616","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"9a7c0a83-3afa-4f88-b0a9-20224f1c26a9","timestampMs":1708384506488,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-pap | [2024-02-19T23:15:06.500+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:41 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"35df8189-a6da-46ad-bacc-3ab4a6fe7616","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"9a7c0a83-3afa-4f88-b0a9-20224f1c26a9","timestampMs":1708384506488,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-pap | [2024-02-19T23:15:06.501+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpStateChange stopping 23:16:41 policy-pap | [2024-02-19T23:15:06.501+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpStateChange stopping enqueue 23:16:41 policy-pap | [2024-02-19T23:15:06.501+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpStateChange stopping timer 23:16:41 policy-pap | [2024-02-19T23:15:06.501+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=35df8189-a6da-46ad-bacc-3ab4a6fe7616, expireMs=1708384536476] 23:16:41 policy-pap | [2024-02-19T23:15:06.501+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpStateChange stopping listener 23:16:41 policy-pap | [2024-02-19T23:15:06.501+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpStateChange stopped 23:16:41 policy-pap | [2024-02-19T23:15:06.501+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpStateChange successful 23:16:41 policy-pap | [2024-02-19T23:15:06.501+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 start publishing next request 23:16:41 policy-pap | [2024-02-19T23:15:06.501+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate starting 23:16:41 policy-pap | [2024-02-19T23:15:06.501+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate starting listener 23:16:41 policy-pap | [2024-02-19T23:15:06.501+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate starting timer 23:16:41 policy-pap | [2024-02-19T23:15:06.501+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=a1a672fa-2c48-48e1-814b-49846bec95c1, expireMs=1708384536501] 23:16:41 policy-pap | [2024-02-19T23:15:06.501+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate starting enqueue 23:16:41 policy-pap | [2024-02-19T23:15:06.501+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate started 23:16:41 policy-pap | [2024-02-19T23:15:06.502+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:41 policy-pap | 
{"source":"pap-a92a4a8b-7770-4bfc-a655-2697c581a9e3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a1a672fa-2c48-48e1-814b-49846bec95c1","timestampMs":1708384506492,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-pap | [2024-02-19T23:15:06.504+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 35df8189-a6da-46ad-bacc-3ab4a6fe7616 23:16:41 policy-pap | [2024-02-19T23:15:06.510+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:41 policy-pap | {"source":"pap-a92a4a8b-7770-4bfc-a655-2697c581a9e3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a1a672fa-2c48-48e1-814b-49846bec95c1","timestampMs":1708384506492,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-pap | [2024-02-19T23:15:06.511+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:41 policy-pap | {"source":"pap-a92a4a8b-7770-4bfc-a655-2697c581a9e3","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a1a672fa-2c48-48e1-814b-49846bec95c1","timestampMs":1708384506492,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-pap | [2024-02-19T23:15:06.511+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:41 policy-pap | [2024-02-19T23:15:06.511+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:41 policy-pap | [2024-02-19T23:15:06.520+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:41 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a1a672fa-2c48-48e1-814b-49846bec95c1","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"08b36924-eee8-4089-b8a2-3790ae49474f","timestampMs":1708384506513,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-pap | [2024-02-19T23:15:06.520+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a1a672fa-2c48-48e1-814b-49846bec95c1 23:16:41 policy-pap | [2024-02-19T23:15:06.522+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:41 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a1a672fa-2c48-48e1-814b-49846bec95c1","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"08b36924-eee8-4089-b8a2-3790ae49474f","timestampMs":1708384506513,"name":"apex-16fd82d3-7dce-4d8c-bf24-21da0b696893","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:41 policy-pap | [2024-02-19T23:15:06.523+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate stopping 23:16:41 policy-pap | [2024-02-19T23:15:06.523+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate stopping enqueue 23:16:41 policy-pap | 
[2024-02-19T23:15:06.523+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate stopping timer 23:16:41 policy-pap | [2024-02-19T23:15:06.523+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=a1a672fa-2c48-48e1-814b-49846bec95c1, expireMs=1708384536501] 23:16:41 policy-pap | [2024-02-19T23:15:06.523+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate stopping listener 23:16:41 policy-pap | [2024-02-19T23:15:06.523+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate stopped 23:16:41 policy-pap | [2024-02-19T23:15:06.528+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 PdpUpdate successful 23:16:41 policy-pap | [2024-02-19T23:15:06.528+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-16fd82d3-7dce-4d8c-bf24-21da0b696893 has no more requests 23:16:41 policy-pap | [2024-02-19T23:15:10.762+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:41 policy-pap | [2024-02-19T23:15:10.769+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:41 policy-pap | [2024-02-19T23:15:11.168+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup 23:16:41 policy-pap | [2024-02-19T23:15:11.677+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup 23:16:41 policy-pap | [2024-02-19T23:15:11.678+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup 23:16:41 policy-pap | [2024-02-19T23:15:12.188+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 23:16:41 policy-pap | [2024-02-19T23:15:12.391+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 23:16:41 policy-pap | [2024-02-19T23:15:12.511+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 23:16:41 policy-pap | [2024-02-19T23:15:12.511+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup 23:16:41 policy-pap | [2024-02-19T23:15:12.511+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup 23:16:41 policy-pap | [2024-02-19T23:15:12.523+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-02-19T23:15:12Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-02-19T23:15:12Z, user=policyadmin)] 23:16:41 policy-pap | [2024-02-19T23:15:13.192+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 23:16:41 policy-pap | [2024-02-19T23:15:13.194+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 23:16:41 policy-pap | [2024-02-19T23:15:13.194+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 23:16:41 policy-pap | [2024-02-19T23:15:13.194+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 23:16:41 policy-pap | [2024-02-19T23:15:13.194+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 23:16:41 policy-pap | 
[2024-02-19T23:15:13.205+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-19T23:15:13Z, user=policyadmin)] 23:16:41 policy-pap | [2024-02-19T23:15:13.529+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup 23:16:41 policy-pap | [2024-02-19T23:15:13.529+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup 23:16:41 policy-pap | [2024-02-19T23:15:13.529+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 23:16:41 policy-pap | [2024-02-19T23:15:13.529+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 23:16:41 policy-pap | [2024-02-19T23:15:13.529+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup 23:16:41 policy-pap | [2024-02-19T23:15:13.529+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup 23:16:41 policy-pap | [2024-02-19T23:15:13.538+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-19T23:15:13Z, user=policyadmin)] 23:16:41 policy-pap | [2024-02-19T23:15:34.099+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 23:16:41 policy-pap | [2024-02-19T23:15:34.101+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 23:16:41 policy-pap | [2024-02-19T23:15:36.388+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=dbfed9da-433f-414e-99bd-a5afc818016c, expireMs=1708384536387] 23:16:41 policy-pap | [2024-02-19T23:15:36.478+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=35df8189-a6da-46ad-bacc-3ab4a6fe7616, expireMs=1708384536476] 23:16:41 kafka | [2024-02-19 23:14:45,566] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,574] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,575] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,575] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,575] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,575] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,585] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,586] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,586] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,586] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,586] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,592] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,592] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,592] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,593] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,593] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,605] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,606] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,606] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,606] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,606] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,613] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,614] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,614] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,614] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,614] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,621] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,621] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,621] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,621] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,621] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,629] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,630] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,630] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,630] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,630] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,636] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,636] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,636] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,636] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,637] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,644] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,644] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,644] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,644] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,645] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,655] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,656] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,656] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,656] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,656] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,664] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,664] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,664] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,664] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,664] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,671] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,672] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,672] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,672] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,672] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,680] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,681] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,681] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,681] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,681] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,688] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,689] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,689] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,689] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,689] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,696] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,697] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,697] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,697] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,697] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,704] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,705] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,705] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,705] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,705] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,714] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,715] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,715] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,715] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,715] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,723] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,723] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,723] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,723] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,723] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,733] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,733] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,733] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,734] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,734] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,744] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,745] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,745] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,745] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,745] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,751] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,751] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,751] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,751] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,752] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,758] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,759] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,759] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,759] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,759] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,799] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,799] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,800] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,800] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,800] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,805] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,805] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,805] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,805] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,805] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,810] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,811] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,811] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,811] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,811] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,817] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,817] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,817] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,817] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,817] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,822] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,822] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,822] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,822] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,823] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,830] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,831] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,831] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,831] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,831] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,838] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,839] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,839] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,839] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,839] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,845] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,848] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,848] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,848] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,848] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,852] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,853] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,853] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,853] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,853] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(TVE3Kq3BQlWiihp0MJOTdw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,859] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,859] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,859] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,859] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,859] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,866] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,867] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,867] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,867] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,867] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,876] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,876] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,876] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,877] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,877] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,881] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,882] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,882] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,882] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,882] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,888] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,888] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,889] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,889] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,889] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,894] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,894] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,894] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,894] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,894] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,900] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,900] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,900] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,900] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,900] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,906] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,906] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,906] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,906] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,907] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,914] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,914] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,914] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,914] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,914] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,922] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,922] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,922] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,922] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,922] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,928] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,928] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,928] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,928] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,928] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,942] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,942] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,942] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,942] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,942] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,950] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,950] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,950] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,950] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,950] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,956] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,956] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,956] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,957] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,957] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,962] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,963] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,963] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,963] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,963] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,972] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:41 kafka | [2024-02-19 23:14:45,973] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:41 kafka | [2024-02-19 23:14:45,973] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,973] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 23:16:41 kafka | [2024-02-19 23:14:45,973] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(LJ8qdtXjQImen0RiTLNUHA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] 
Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,978] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 
(state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 
for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,980] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:45,989] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,993] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,993] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,993] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,993] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,993] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,993] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager 
brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of 
offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,994] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,995] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:45,995] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:45,995] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,003] INFO [Broker id=1] Finished LeaderAndIsr request in 661ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,003] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,007] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,007] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,007] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,007] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,007] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,007] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,007] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,007] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,008] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,008] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,008] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,008] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,009] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,009] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,009] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,009] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,009] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,009] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,009] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,009] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,009] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,009] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,009] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,010] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,011] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,011] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,011] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,011] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,011] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,011] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,011] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,011] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,011] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,011] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,011] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,012] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 18 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,012] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,012] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,012] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:41 kafka | [2024-02-19 23:14:46,012] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=LJ8qdtXjQImen0RiTLNUHA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, 
errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=TVE3Kq3BQlWiihp0MJOTdw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,020] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,021] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,022] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,022] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:41 kafka | [2024-02-19 23:14:46,094] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73 in Empty state. Created a new member id consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3-3d8c0b64-a932-4c61-8259-d6a7a58c73fa and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:46,101] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-cc9d25bb-0239-4ab2-ab72-09c9a8b909d4 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:46,115] INFO [GroupCoordinator 1]: Preparing to rebalance group d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73 in state PreparingRebalance with old generation 0 (__consumer_offsets-37) (reason: Adding new member consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3-3d8c0b64-a932-4c61-8259-d6a7a58c73fa with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:46,117] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-cc9d25bb-0239-4ab2-ab72-09c9a8b909d4 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:46,635] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 8a152ea0-3554-4e34-a917-801a2773d54e in Empty state. Created a new member id consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2-ed48d9f0-0d89-419b-b201-03b48a20b29c and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:46,639] INFO [GroupCoordinator 1]: Preparing to rebalance group 8a152ea0-3554-4e34-a917-801a2773d54e in state PreparingRebalance with old generation 0 (__consumer_offsets-19) (reason: Adding new member consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2-ed48d9f0-0d89-419b-b201-03b48a20b29c with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:49,127] INFO [GroupCoordinator 1]: Stabilized group d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73 generation 1 (__consumer_offsets-37) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:49,131] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:49,161] INFO [GroupCoordinator 1]: Assignment received from leader consumer-d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73-3-3d8c0b64-a932-4c61-8259-d6a7a58c73fa for group d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:49,165] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-cc9d25bb-0239-4ab2-ab72-09c9a8b909d4 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:49,640] INFO [GroupCoordinator 1]: Stabilized group 8a152ea0-3554-4e34-a917-801a2773d54e generation 1 (__consumer_offsets-19) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:41 kafka | [2024-02-19 23:14:49,653] INFO [GroupCoordinator 1]: Assignment received from leader consumer-8a152ea0-3554-4e34-a917-801a2773d54e-2-ed48d9f0-0d89-419b-b201-03b48a20b29c for group 8a152ea0-3554-4e34-a917-801a2773d54e for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:41 ++ echo 'Tearing down containers...' 23:16:41 Tearing down containers... 23:16:41 ++ docker-compose down -v --remove-orphans 23:16:41 Stopping grafana ... 23:16:41 Stopping policy-apex-pdp ... 
23:16:41 Stopping policy-pap ... 23:16:41 Stopping kafka ... 23:16:41 Stopping policy-api ... 23:16:41 Stopping mariadb ... 23:16:41 Stopping prometheus ... 23:16:41 Stopping compose_zookeeper_1 ... 23:16:41 Stopping simulator ... 23:16:42 Stopping grafana ... done 23:16:42 Stopping prometheus ... done 23:16:52 Stopping policy-apex-pdp ... done 23:17:02 Stopping simulator ... done 23:17:02 Stopping policy-pap ... done 23:17:03 Stopping mariadb ... done 23:17:03 Stopping kafka ... done 23:17:04 Stopping compose_zookeeper_1 ... done 23:17:12 Stopping policy-api ... done 23:17:12 Removing grafana ... 23:17:12 Removing policy-apex-pdp ... 23:17:12 Removing policy-pap ... 23:17:12 Removing kafka ... 23:17:12 Removing policy-api ... 23:17:12 Removing policy-db-migrator ... 23:17:12 Removing mariadb ... 23:17:12 Removing prometheus ... 23:17:12 Removing compose_zookeeper_1 ... 23:17:12 Removing simulator ... 23:17:13 Removing policy-db-migrator ... done 23:17:13 Removing mariadb ... done 23:17:13 Removing policy-api ... done 23:17:13 Removing policy-pap ... done 23:17:13 Removing policy-apex-pdp ... done 23:17:13 Removing compose_zookeeper_1 ... done 23:17:13 Removing kafka ... done 23:17:13 Removing simulator ... done 23:17:13 Removing grafana ... done 23:17:13 Removing prometheus ... done 23:17:13 Removing network compose_default 23:17:13 ++ cd /w/workspace/policy-pap-master-project-csit-pap 23:17:13 + load_set 23:17:13 + _setopts=hxB 23:17:13 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:17:13 ++ tr : ' ' 23:17:13 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:13 + set +o braceexpand 23:17:13 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:13 + set +o hashall 23:17:13 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:13 + set +o interactive-comments 23:17:13 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:13 + set +o xtrace 23:17:13 ++ echo hxB 23:17:13 ++ sed 's/./& /g' 23:17:13 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:13 + set +h 23:17:13 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:13 + set +x 23:17:13 + [[ -n /tmp/tmp.hHiucWoJXw ]] 23:17:13 + rsync -av /tmp/tmp.hHiucWoJXw/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:17:13 sending incremental file list 23:17:13 ./ 23:17:13 log.html 23:17:13 output.xml 23:17:13 report.html 23:17:13 testplan.txt 23:17:13 23:17:13 sent 909,954 bytes received 95 bytes 1,820,098.00 bytes/sec 23:17:13 total size is 909,409 speedup is 1.00 23:17:13 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:17:13 + exit 0 23:17:13 $ ssh-agent -k 23:17:13 unset SSH_AUTH_SOCK; 23:17:13 unset SSH_AGENT_PID; 23:17:13 echo Agent pid 2108 killed; 23:17:13 [ssh-agent] Stopped. 23:17:13 Robot results publisher started... 23:17:13 INFO: Checking test criticality is deprecated and will be dropped in a future release! 23:17:13 -Parsing output xml: 23:17:13 Done! 23:17:13 WARNING! Could not find file: **/log.html 23:17:13 WARNING! Could not find file: **/report.html 23:17:13 -Copying log files to build dir: 23:17:14 Done! 23:17:14 -Assigning results to build: 23:17:14 Done! 23:17:14 -Checking thresholds: 23:17:14 Done! 23:17:14 Done publishing Robot results. 23:17:14 [PostBuildScript] - [INFO] Executing post build scripts. 
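
A note on the coordinator activity recorded above: each consumer group is pinned to one of the 50 __consumer_offsets partitions (the broker default, consistent with the 51-partition LeaderAndIsr request — 50 offsets partitions plus policy-pdp-pap-0), chosen as the absolute value of the Java String.hashCode of the group.id modulo the partition count. That is why group policy-pap rebalances on __consumer_offsets-24 and group d0e7ca5a-884a-4f1a-a9f2-8a991f9f7b73 on __consumer_offsets-37. The "rebalance failed due to MemberIdRequiredException" reason is likewise expected noise, not an error: a first JoinGroup from an unknown member is deliberately bounced with a server-assigned member id, and the client immediately rejoins with it, which is exactly the sequence logged at 23:14:46. A minimal Python sketch of the partition mapping, assuming the default of 50 partitions:

    def java_string_hashcode(s: str) -> int:
        # Java's String.hashCode(): h = 31*h + ch, in 32-bit signed arithmetic
        h = 0
        for ch in s:
            h = (31 * h + ord(ch)) & 0xFFFFFFFF
        return h - 0x1_0000_0000 if h >= 0x8000_0000 else h

    def coordinator_partition(group_id: str, num_partitions: int = 50) -> int:
        # Kafka uses Utils.abs(hash) % partitions; Python's abs() matches it
        # for every input except the Integer.MIN_VALUE corner case
        return abs(java_string_hashcode(group_id)) % num_partitions

    print(coordinator_partition("policy-pap"))  # 24 -> __consumer_offsets-24, as in the log
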
23:17:14 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11429149025018042095.sh 23:17:14 ---> sysstat.sh 23:17:14 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10543696820756727648.sh 23:17:14 ---> package-listing.sh 23:17:14 ++ facter osfamily 23:17:14 ++ tr '[:upper:]' '[:lower:]' 23:17:14 + OS_FAMILY=debian 23:17:14 + workspace=/w/workspace/policy-pap-master-project-csit-pap 23:17:14 + START_PACKAGES=/tmp/packages_start.txt 23:17:14 + END_PACKAGES=/tmp/packages_end.txt 23:17:14 + DIFF_PACKAGES=/tmp/packages_diff.txt 23:17:14 + PACKAGES=/tmp/packages_start.txt 23:17:14 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:14 + PACKAGES=/tmp/packages_end.txt 23:17:14 + case "${OS_FAMILY}" in 23:17:14 + dpkg -l 23:17:14 + grep '^ii' 23:17:14 + '[' -f /tmp/packages_start.txt ']' 23:17:14 + '[' -f /tmp/packages_end.txt ']' 23:17:14 + diff /tmp/packages_start.txt /tmp/packages_end.txt 23:17:14 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:14 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:14 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:14 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4629543479130392652.sh 23:17:14 ---> capture-instance-metadata.sh 23:17:14 Setup pyenv: 23:17:14 system 23:17:14 3.8.13 23:17:14 3.9.13 23:17:14 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:14 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xV1d from file:/tmp/.os_lf_venv 23:17:16 lf-activate-venv(): INFO: Installing: lftools 23:17:27 lf-activate-venv(): INFO: Adding /tmp/venv-xV1d/bin to PATH 23:17:27 INFO: Running in OpenStack, capturing instance metadata 23:17:27 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16356528361515615897.sh 23:17:27 provisioning config files... 23:17:27 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config13525009927047857758tmp 23:17:27 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 23:17:27 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 23:17:27 [EnvInject] - Injecting environment variables from a build step. 23:17:27 [EnvInject] - Injecting as environment variables the properties content 23:17:27 SERVER_ID=logs 23:17:27 23:17:27 [EnvInject] - Variables injected successfully. 23:17:27 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8626852073046479709.sh 23:17:27 ---> create-netrc.sh 23:17:27 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15757931854036156099.sh 23:17:27 ---> python-tools-install.sh 23:17:27 Setup pyenv: 23:17:27 system 23:17:27 3.8.13 23:17:27 3.9.13 23:17:27 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:27 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xV1d from file:/tmp/.os_lf_venv 23:17:29 lf-activate-venv(): INFO: Installing: lftools 23:17:37 lf-activate-venv(): INFO: Adding /tmp/venv-xV1d/bin to PATH 23:17:37 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4672028484582830046.sh 23:17:37 ---> sudo-logs.sh 23:17:37 Archiving 'sudo' log.. 
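
The package-listing.sh step above is simple bookkeeping: it snapshots the installed packages (the '^ii' lines of dpkg -l) at job start and again here, diffs the two, and copies all three files into the workspace archives. A rough Python equivalent of that diff step, not the script itself, using the same /tmp paths the trace shows:

    import subprocess

    def installed_packages() -> set[str]:
        # the 'ii' lines of `dpkg -l`, i.e. currently installed packages
        out = subprocess.run(["dpkg", "-l"], capture_output=True,
                             text=True, check=True).stdout
        return {line for line in out.splitlines() if line.startswith("ii")}

    start = set(open("/tmp/packages_start.txt").read().splitlines())
    end = installed_packages()  # the real script writes this to /tmp/packages_end.txt
    with open("/tmp/packages_diff.txt", "w") as f:
        f.write("\n".join(sorted(end - start)))  # what the build installed or upgraded
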
23:17:37 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins18006336474448262746.sh 23:17:37 ---> job-cost.sh 23:17:37 Setup pyenv: 23:17:37 system 23:17:37 3.8.13 23:17:37 3.9.13 23:17:37 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:37 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xV1d from file:/tmp/.os_lf_venv 23:17:39 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 23:17:46 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. 23:17:46 lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible. 23:17:46 lf-activate-venv(): INFO: Adding /tmp/venv-xV1d/bin to PATH 23:17:46 INFO: No Stack... 23:17:46 INFO: Retrieving Pricing Info for: v3-standard-8 23:17:46 INFO: Archiving Costs 23:17:46 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins17462469918840669672.sh 23:17:46 ---> logs-deploy.sh 23:17:46 Setup pyenv: 23:17:46 system 23:17:46 3.8.13 23:17:46 3.9.13 23:17:46 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:47 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-xV1d from file:/tmp/.os_lf_venv 23:17:48 lf-activate-venv(): INFO: Installing: lftools 23:17:57 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. 23:17:57 python-openstackclient 6.5.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible. 23:17:58 lf-activate-venv(): INFO: Adding /tmp/venv-xV1d/bin to PATH 23:17:58 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1583 23:17:58 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 23:17:59 Archives upload complete. 
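
Both pip warnings above are genuine version conflicts in the shared venv rather than job failures: job-cost.sh installs python-openstackclient (which wants openstacksdk>=2.0.0), while lftools 0.37.8 pins openstacksdk<1.5.0, so whichever package was installed last trips the other's constraint; the job proceeds because the two conflicting code paths are not exercised together. A quick sketch for checking such a pin against what is actually installed (this assumes the third-party packaging library is available in the venv):

    from importlib.metadata import version
    from packaging.specifiers import SpecifierSet

    installed = version("openstacksdk")
    # the two constraints pip reported in this job
    pins = {"lftools 0.37.8": SpecifierSet("<1.5.0"),
            "python-openstackclient 6.5.0": SpecifierSet(">=2.0.0")}
    for pkg, spec in pins.items():
        print(f"{pkg} needs openstacksdk{spec}; "
              f"installed {installed}; ok={installed in spec}")
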
23:17:59 INFO: archiving logs to Nexus
23:18:00 ---> uname -a:
23:18:00 Linux prd-ubuntu1804-docker-8c-8g-6858 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
23:18:00
23:18:00 ---> lscpu:
23:18:00 Architecture: x86_64
23:18:00 CPU op-mode(s): 32-bit, 64-bit
23:18:00 Byte Order: Little Endian
23:18:00 CPU(s): 8
23:18:00 On-line CPU(s) list: 0-7
23:18:00 Thread(s) per core: 1
23:18:00 Core(s) per socket: 1
23:18:00 Socket(s): 8
23:18:00 NUMA node(s): 1
23:18:00 Vendor ID: AuthenticAMD
23:18:00 CPU family: 23
23:18:00 Model: 49
23:18:00 Model name: AMD EPYC-Rome Processor
23:18:00 Stepping: 0
23:18:00 CPU MHz: 2799.998
23:18:00 BogoMIPS: 5599.99
23:18:00 Virtualization: AMD-V
23:18:00 Hypervisor vendor: KVM
23:18:00 Virtualization type: full
23:18:00 L1d cache: 32K
23:18:00 L1i cache: 32K
23:18:00 L2 cache: 512K
23:18:00 L3 cache: 16384K
23:18:00 NUMA node0 CPU(s): 0-7
23:18:00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
23:18:00
23:18:00 ---> nproc:
23:18:00 8
23:18:00
23:18:00 ---> df -h:
23:18:00 Filesystem      Size  Used  Avail  Use%  Mounted on
23:18:00 udev             16G     0    16G    0%  /dev
23:18:00 tmpfs           3.2G  708K   3.2G    1%  /run
23:18:00 /dev/vda1       155G   14G   142G    9%  /
23:18:00 tmpfs            16G     0    16G    0%  /dev/shm
23:18:00 tmpfs           5.0M     0   5.0M    0%  /run/lock
23:18:00 tmpfs            16G     0    16G    0%  /sys/fs/cgroup
23:18:00 /dev/vda15      105M  4.4M   100M    5%  /boot/efi
23:18:00 tmpfs           3.2G     0   3.2G    0%  /run/user/1001
23:18:00
23:18:00 ---> free -m:
23:18:00        total  used  free   shared  buff/cache  available
23:18:00 Mem:   32167   850  25317       0        5998      30860
23:18:00 Swap:   1023     0   1023
23:18:00
23:18:00 ---> ip addr:
23:18:00 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
23:18:00     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
23:18:00     inet 127.0.0.1/8 scope host lo
23:18:00        valid_lft forever preferred_lft forever
23:18:00     inet6 ::1/128 scope host
23:18:00        valid_lft forever preferred_lft forever
23:18:00 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
23:18:00     link/ether fa:16:3e:83:c2:29 brd ff:ff:ff:ff:ff:ff
23:18:00     inet 10.30.106.151/23 brd 10.30.107.255 scope global dynamic ens3
23:18:00        valid_lft 85943sec preferred_lft 85943sec
23:18:00     inet6 fe80::f816:3eff:fe83:c229/64 scope link
23:18:00        valid_lft forever preferred_lft forever
23:18:00 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
23:18:00     link/ether 02:42:21:73:bd:8e brd ff:ff:ff:ff:ff:ff
23:18:00     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
23:18:00        valid_lft forever preferred_lft forever
23:18:00
23:18:00 ---> sar -b -r -n DEV:
23:18:00 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-6858) 02/19/24 _x86_64_ (8 CPU)
23:18:00 23:10:25 LINUX RESTART (8 CPU)
23:18:00
23:18:00 23:11:02        tps     rtps     wtps   bread/s    bwrtn/s
23:18:00 23:12:01     113.93    36.16    77.77   1711.42   26336.56
23:18:00 23:13:01     129.56    23.05   106.52   2761.41   32164.37
23:18:00 23:14:01     233.72     0.17   233.55     15.18  126676.25
23:18:00 23:15:01     338.26    13.36   324.90    816.00   45595.83
23:18:00 23:16:01      18.61     0.00    18.61      0.00   19628.95
23:18:00 23:17:01      26.48     0.03    26.45      4.40   20765.34
23:18:00 Average:     143.53    12.06   131.47    882.26   45264.68
23:18:00
23:18:00 23:11:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
23:18:00 23:12:01  30080952 31685620   2858268     8.68     69900  1844388  1451264    4.27   886648 1679988  166800
23:18:00 23:13:01  29298896 31668392   3640324    11.05     91876  2563196  1557500    4.58   989936 2302512  540264
23:18:00 23:14:01  26007948 31665088   6931272    21.04    138980  5652976  1454056    4.28  1022884 5388316  405792
23:18:00 23:15:01  23775656 29599552   9163564    27.82    155048  5784272  8860092   26.07  3267008 5297860    1364
23:18:00 23:16:01  23783596 29608232   9155624    27.80    155244  5784536  8846160   26.03  3259456 5295412     264
23:18:00 23:17:01  24036852 29887180   8902368    27.03    155728  5812556  7209460   21.21  3004072 5309380     208
23:18:00 Average:  26163983 30685677   6775237    20.57    127796  4573654  4896422   14.41  2071667 4212245  185782
23:18:00
23:18:00 23:11:02 IFACE            rxpck/s  txpck/s    rxkB/s  txkB/s  rxcmp/s  txcmp/s  rxmcst/s  %ifutil
23:18:00 23:12:01 lo                  1.63     1.63      0.17    0.17     0.00     0.00      0.00     0.00
23:18:00 23:12:01 ens3               64.69    43.24    947.66    8.98     0.00     0.00      0.00     0.00
23:18:00 23:12:01 docker0             0.00     0.00      0.00    0.00     0.00     0.00      0.00     0.00
23:18:00 23:13:01 lo                  6.27     6.27      0.59    0.59     0.00     0.00      0.00     0.00
23:18:00 23:13:01 ens3              167.57   110.78   4308.08   12.79     0.00     0.00      0.00     0.00
23:18:00 23:13:01 docker0             0.00     0.00      0.00    0.00     0.00     0.00      0.00     0.00
23:18:00 23:13:01 br-4ead9628c4f6     0.00     0.00      0.00    0.00     0.00     0.00      0.00     0.00
23:18:00 23:14:01 lo                  7.12     7.12      0.71    0.71     0.00     0.00      0.00     0.00
23:18:00 23:14:01 ens3             1012.25   518.99  26633.37   37.93     0.00     0.00      0.00     0.00
23:18:00 23:14:01 docker0             0.00     0.00      0.00    0.00     0.00     0.00      0.00     0.00
23:18:00 23:14:01 br-4ead9628c4f6     0.00     0.00      0.00    0.00     0.00     0.00      0.00     0.00
23:18:00 23:15:01 vethf9c764c        77.04    91.53     41.97   23.20     0.00     0.00      0.00     0.00
23:18:00 23:15:01 vetha7f5a1d         0.50     0.72      0.05    0.30     0.00     0.00      0.00     0.00
23:18:00 23:15:01 lo                  2.42     2.42      2.42    2.42     0.00     0.00      0.00     0.00
23:18:00 23:15:01 ens3               14.00     9.63      3.66    3.25     0.00     0.00      0.00     0.00
23:18:00 23:16:01 vethf9c764c        30.86    37.26     35.67    8.54     0.00     0.00      0.00     0.00
23:18:00 23:16:01 vetha7f5a1d         0.22     0.15      0.01    0.01     0.00     0.00      0.00     0.00
23:18:00 23:16:01 lo                  5.83     5.83      1.35    1.35     0.00     0.00      0.00     0.00
23:18:00 23:16:01 ens3                3.30     3.47      0.70    1.03     0.00     0.00      0.00     0.00
23:18:00 23:17:01 vethf9c764c         0.22     0.42      0.09    0.07     0.00     0.00      0.00     0.00
23:18:00 23:17:01 lo                  7.05     7.05      0.56    0.56     0.00     0.00      0.00     0.00
23:18:00 23:17:01 ens3               15.70    13.86      6.43   13.45     0.00     0.00      0.00     0.00
23:18:00 23:17:01 vethcf91af4        39.96    30.28      3.87    4.34     0.00     0.00      0.00     0.00
23:18:00 Average: vethf9c764c        18.06    21.59     12.99    5.31     0.00     0.00      0.00     0.00
23:18:00 Average: lo                  5.06     5.06      0.97    0.97     0.00     0.00      0.00     0.00
23:18:00 Average: ens3              213.50   116.95   5333.44   12.92     0.00     0.00      0.00     0.00
23:18:00 Average: vethcf91af4         6.68     5.06      0.65    0.73     0.00     0.00      0.00     0.00
23:18:00
23:18:00 ---> sar -P ALL:
23:18:00 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-6858) 02/19/24 _x86_64_ (8 CPU)
23:18:00 23:10:25 LINUX RESTART (8 CPU)
23:18:00
23:18:00 23:11:02 CPU    %user  %nice  %system  %iowait  %steal  %idle
23:18:00 23:12:01 all     9.93   0.00     0.84     2.31    0.04  86.88
23:18:00 23:12:01 0       1.51   0.00     0.59     0.66    0.00  97.24
23:18:00 23:12:01 1      13.10   0.00     1.02     0.49    0.05  85.34
23:18:00 23:12:01 2       9.87   0.00     0.90     0.27    0.02  88.94
23:18:00 23:12:01 3      14.32   0.00     1.14     0.41    0.03  84.10
23:18:00 23:12:01 4      16.58   0.00     0.76     1.94    0.03  80.69
23:18:00 23:12:01 5      19.84   0.00     1.36     0.99    0.07  77.75
23:18:00 23:12:01 6       2.71   0.00     0.34     0.17    0.02  96.76
23:18:00 23:12:01 7       1.58   0.00     0.61    13.57    0.03  84.21
23:18:00 23:13:01 all    10.21   0.00     1.37     2.37    0.04  86.01
23:18:00 23:13:01 0      12.17   0.00     1.52     0.25    0.03  86.02
23:18:00 23:13:01 1      18.54   0.00     1.41     1.29    0.08  78.68
23:18:00 23:13:01 2      23.43   0.00     2.43     2.17    0.05  71.92
23:18:00 23:13:01 3       9.90   0.00     1.34     0.03    0.03  88.70
23:18:00 23:13:01 4       5.56   0.00     1.09     0.33    0.02  93.00
23:18:00 23:13:01 5       1.85   0.00     0.89    13.10    0.05  84.11
23:18:00 23:13:01 6       1.23   0.00     0.90     0.12    0.02  97.73
23:18:00 23:13:01 7       8.97   0.00     1.41     1.72    0.07  87.83
23:18:00 23:14:01 all    11.41   0.00     4.99     6.94    0.06  76.60
23:18:00 23:14:01 0       9.94   0.00     4.66     5.74    0.05  79.61
23:18:00 23:14:01 1      11.85   0.00     5.20     6.37    0.07  76.51
23:18:00 23:14:01 2      12.29   0.00     4.34     0.68    0.05  82.64
23:18:00 23:14:01 3      11.29   0.00     4.98     0.27    0.08  83.37
23:18:00 23:14:01 4      11.25   0.00     5.56     0.10    0.05  83.04
23:18:00 23:14:01 5      11.17   0.00     4.08    11.39    0.07  73.29
23:18:00 23:14:01 6      11.67   0.00     5.09     2.72    0.03  80.48
23:18:00 23:14:01 7      11.81   0.00     5.98    28.43    0.07  53.71
23:18:00 23:15:01 all    27.72   0.00     3.53     2.78    0.08  65.89
23:18:00 23:15:01 0      30.38   0.00     4.14     1.66    0.07  63.76
23:18:00 23:15:01 1      27.53   0.00     3.14     0.86    0.08  68.39
23:18:00 23:15:01 2      25.93   0.00     3.39     3.92    0.10  66.66
23:18:00 23:15:01 3      34.72   0.00     4.47     0.92    0.07  59.82
23:18:00 23:15:01 4      28.29   0.00     3.22     4.09    0.08  64.32
23:18:00 23:15:01 5      30.58   0.00     3.95     2.96    0.08  62.43
23:18:00 23:15:01 6      20.87   0.00     2.91     1.16    0.08  74.97
23:18:00 23:15:01 7      23.44   0.00     3.04     6.63    0.10  66.78
23:18:00 23:16:01 all     4.14   0.00     0.38     1.04    0.07  94.38
23:18:00 23:16:01 0       6.91   0.00     0.62     0.00    0.10  92.38
23:18:00 23:16:01 1       6.13   0.00     0.48     0.02    0.08  93.29
23:18:00 23:16:01 2       2.54   0.00     0.23     8.00    0.08  89.14
23:18:00 23:16:01 3       2.83   0.00     0.27     0.08    0.07  96.75
23:18:00 23:16:01 4       3.14   0.00     0.30     0.17    0.07  96.33
23:18:00 23:16:01 5       3.24   0.00     0.28     0.02    0.07  96.39
23:18:00 23:16:01 6       3.74   0.00     0.50     0.02    0.05  95.69
23:18:00 23:16:01 7       4.59   0.00     0.35     0.00    0.05  95.01
23:18:00 23:17:01 all     1.15   0.00     0.34     1.15    0.06  97.30
23:18:00 23:17:01 0       0.98   0.00     0.33     0.05    0.08  98.55
23:18:00 23:17:01 1       0.83   0.00     0.32     0.00    0.03  98.82
23:18:00 23:17:01 2       1.57   0.00     0.45     8.05    0.05  89.89
23:18:00 23:17:01 3       1.17   0.00     0.22     0.18    0.05  98.38
23:18:00 23:17:01 4       0.78   0.00     0.45     0.45    0.05  98.26
23:18:00 23:17:01 5       1.39   0.00     0.23     0.35    0.07  97.96
23:18:00 23:17:01 6       0.80   0.00     0.32     0.08    0.03  98.77
23:18:00 23:17:01 7       1.71   0.00     0.45     0.02    0.07  97.76
23:18:00 Average: all    10.74   0.00     1.90     2.76    0.06  84.54
23:18:00 Average: 0      10.33   0.00     1.97     1.39    0.06  86.26
23:18:00 Average: 1      12.98   0.00     1.92     1.49    0.07  83.54
23:18:00 Average: 2      12.59   0.00     1.95     3.86    0.06  81.53
23:18:00 Average: 3      12.34   0.00     2.06     0.32    0.06  85.22
23:18:00 Average: 4      10.90   0.00     1.89     1.18    0.05  85.98
23:18:00 Average: 5      11.31   0.00     1.79     4.79    0.07  82.04
23:18:00 Average: 6       6.83   0.00     1.67     0.71    0.04  90.75
23:18:00 Average: 7       8.67   0.00     1.96     8.33    0.06  80.97
23:18:00
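
Read together, the sar tables summarize the run: disk writes spike in the 23:14 interval (bwrtn/s of about 126,676, most likely the docker image pulls and database load during stack bring-up), CPU peaks in the 23:15 interval while the CSIT tests execute (%idle bottoms out at 65.89 across all CPUs), and the node is essentially idle again from 23:16. A minimal sketch for pulling the per-interval "all" rows out of a saved copy of the sar -P ALL block above and flagging the busiest interval (sar_cpu.txt is a hypothetical capture of that table with the console timestamps stripped):

    rows = []
    for line in open("sar_cpu.txt"):
        parts = line.split()
        # aggregate rows look like: HH:MM:SS all %user %nice %system %iowait %steal %idle
        if len(parts) == 8 and parts[1] == "all" and parts[0] != "Average:":
            rows.append((parts[0], float(parts[2]), float(parts[7])))  # (time, %user, %idle)

    busiest = min(rows, key=lambda r: r[2])  # lowest %idle = busiest interval
    print(busiest)  # for this run: ('23:15:01', 27.72, 65.89)
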